
[Salesforce / Apex / VisualForce] Getting rid of the “Save & New” button

In some circumstances the “Save & New” button on the standard layout is not desired.

There is an idea (and there are others of this kind) to make this button optional on layouts, but unfortunately it has been frozen for years.

The simplest solution used to be to execute some JavaScript in a home page component in order to hide every button named “save_new”:
Remove Save & New button of the standard edit page.

With Summer ’15 you can no longer use custom home page components that contain script tags, so the hack is not valid anymore: if you try to create/edit an HTML Area Custom Component you get this info message:

This component contains code that is no longer supported. For security reasons, in Summer ’15 we will start removing non-supported code from HTML Area home page components. As a result, this component may stop working properly.

If you want to use JavaScript or other advanced HTML elements in your home page component, we recommend that you create a Visualforce Area component instead.

I had to find a solution for a client, and the “New” button override was the way.

The button I’m talking about is this one (on the standard edit layout):

You simply have to override the “New” button on your selected objects, creating for each object a Visualforce page with a standard controller and the following extension:

<apex:page standardController="Account" extensions="SObjectNewActionOverrideController" action="{!onLoadPage}">
</apex:page>

public class SObjectNewActionOverrideController {
    public String debug {get; set;}
    private String sObjectPrefix = null;

    public SObjectNewActionOverrideController(ApexPages.StandardController controller){
        this.sObjectPrefix = controller.getRecord().getSObjectType().getDescribe().getKeyPrefix();
    }

    public PageReference onLoadPage(){
        this.debug = JSON.serializePretty(ApexPages.currentPage().getParameters());
        String retURL = ApexPages.currentPage().getParameters().get('retURL');
        //these are the conditions to understand that this is actually a "new" page
        //that comes from a previous "save" page in which the user clicked on "Save & New"
        if(ApexPages.currentPage().getParameters().get('save_new') == '1' 
           && retURL != null
           && retURL.startsWith('/' + sObjectPrefix)
           && retURL.indexOf('/', 4) < 0
           && !retURL.contains('?')
           && retURL.length() >= 15){
               //redirect back to the record that has just been saved
               PageReference pg = new PageReference(retURL);
               pg.setRedirect(true);
               return pg;
           }else{
               //go to the standard edit (new) page, keeping all the original parameters
               PageReference pg = new PageReference('/' + this.sObjectPrefix + '/e');
               for(String key : ApexPages.currentPage().getParameters().keySet()){
                   if(key == 'save_new' || key == 'sfdc.override') continue;
                   pg.getParameters().put(key, ApexPages.currentPage().getParameters().get(key));
               }
               //https://mindfiresfdcprofessionals.wordpress.com/2013/07/10/override-standard-button-in-specific-condition/
               pg.getParameters().put('nooverride','1');
               pg.setRedirect(true);
               return pg;
           }
    }
}

The code simply redirects to the previously edited record if we are coming from a save operation (and the user clicked on “Save & New”), getting rid of the “New” part of the “Save & New” action; otherwise it simply goes to the standard edit page (the retURL is evaluated: if it is not a record ID, the code proceeds to the “New” action).

As noted in the code, there are other places in which the user clicks a “New” button and the “save_new” parameter is still passed by Salesforce (I don’t understand the reason, though), but in those cases the user should still see the “New” page.

These buttons are the “New” button on the SObject home page (e.g. the /001/o page):

And the related lists:

Comments and alternative solutions are really appreciated!

[Salesforce / SSO] Implementing Delegated SSO


Playing with code is cool, but playing with useless stuff is even better 🙂

Ok, I’m kidding. I just want to say that sometimes you have to get your hands dirty to understand what lies underneath things, and try to build useless stuff just to see a simple “Hello world!” appear!

This is the case of Delegated SSO.

A few weeks ago, together with Paolo (a colleague of mine), I was digging deeper into Salesforce SSO, trying to figure out from the docs how to implement it.

The first thing that came across my eyes was the difference between Delegated and Federated SSO.
It wasn’t that clear at the time; that’s why Paolo played with the code for a while and built a cool thing that I reproduced in the following GitHub repo.

Federated SSO is done using well-known protocols such as SAML, granting secure identity provisioning: with Microsoft ADFS, for instance, you can use your own company domain to log in to your Salesforce CRMs as well.

What is the problem with this kind of SSO?

To implement its basics you don’t need to write a single line of code.

Too bad we are dirty men that like to play with mud!

That’s the reason for this post!

With Delegated SSO you need 2 actors:

  • An Identity provider (e.g. your Domain server)
  • The Salesforce ORG in which you want to be logged in without remembering username/password

The first wall you run into is that your ORG has to be enabled for Delegated SSO: this must be done by Salesforce support, so you need a way to contact support (if you’re not using an ORG with built-in support).

After your ORG is enabled for Delegated SSO, this is where the configuration has to be set in your “delegated” ORG:

The Delegated Gateway URL contains the URL of the webservice of the “delegating” server.

Then you have to enable the Is Single Sign-On Enabled flag in the Administrative Permissions section of your users’ profile.

This is where we got our hands dirty during our research: why not use a Salesforce ORG as the identity provider?

Challenge accepted!

What happens with Delegated SSO? When you try to log in to your delegated ORG, Salesforce first calls the “Delegated SSO” web service: if it accepts the request, you are automatically logged in; if the server answers with a KO, the ORG then checks whether the username/password are correct.

This is the message that the delegated ORG sends to the identity provider:

<?xml version="1.0" encoding="UTF-8" ?>
<soapenv:Envelope
   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <Authenticate xmlns="urn:authentication.soap.sforce.com">
         <username>[email protected]</username>
         <password>myPassword99</password>
         <sourceIp>1.2.3.4</sourceIp>
      </Authenticate>
   </soapenv:Body>
</soapenv:Envelope>

It basically asks for a username/password pair along with a source IP, and receives this response:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope 
   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <AuthenticateResult xmlns="urn:authentication.soap.sforce.com">
         <Authenticated>false</Authenticated>
      </AuthenticateResult>
   </soapenv:Body>
</soapenv:Envelope>

The Authenticated field conveys the OK/KO result.

You can use the password field to host a unique and temporary token to make the connection more secure.

To log in from your identity provider page, use this example page (see the /apex/DelegateLogin page):

Each of these users is a delegated user on another ORG, stored in the DelegatedUser__c SObject:

It stores the Username and the Remote ORG ID, because they are used to create the login URL that makes the authentication go smoothly for the user:

public PageReference delegateAuthentication(){
 String password = generateGUID();
 insert new DelegatedToken__c(Token__c = password, 
         Username__c = this.usr.DelegatedUsername__c,
         RequestIP__c = getCurrentIP());
 String url = 'https://login.salesforce.com/login.jsp?'
    +'un='+EncodingUtil.urlEncode(this.usr.DelegatedUsername__c, 'utf8')
    +'&orgId='+this.usr.Delegated_ORG_ID__c 
    +'&pw='+EncodingUtil.urlEncode(password, 'utf8')
    +'&rememberUn=0&jse=0';
 //you can also set up the startURL, logoutURL and ssoStartPage parameters to enhance the user experience
 PageReference page = new PageReference(url);
 page.setRedirect(false);
 return page;
}
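
The generateGUID() method used above is not shown in the snippet; a minimal sketch of a possible implementation (the actual code in the repo may differ) is:

//hypothetical helper: builds a random GUID-like token from a 128-bit AES key
private String generateGUID(){
    String hex = EncodingUtil.convertToHex(Crypto.generateAesKey(128));
    return hex.substring(0,8) + '-' + hex.substring(8,12) + '-' + hex.substring(12,16)
        + '-' + hex.substring(16,20) + '-' + hex.substring(20);
}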

This way you can request a login action for every user you have stored in your objects (this case uses the same ORG, but you can target whatever ORG you want, with no limits); the token is stored in a DelegatedToken__c SObject that is used to handle temporary tokens, usernames and IPs: this way, when the delegated ORG asks our ORG with this info, our web service can successfully authenticate the requesting user.

This is done through the public webservice exposed by the RESTDelegatedAuthenticator class:

    @HttpPost
    global static void getOpenCases() {
        RestResponse response = RestContext.response;
        response.statusCode = 200;
        response.addHeader('Content-Type', 'application/xml');
        Boolean authResult = false;
        try{
            Dom.Document doc = new DOM.Document(); 
            doc.load(RestContext.request.requestBody.toString());  
            DOM.XMLNode root = doc.getRootElement();
            Map<String,String> requestValues = walkThrough(root);
            
            
            authResult = checkCredentials(requestValues.get('username'), 
                                          requestValues.get('password'),
                                          requestValues.get('sourceIp'));
        }catch(Exception e){
            insert new Log__c(Description__c = e.getStackTraceString()+'\n'+e.getMessage(), 
                       Request__c = RestContext.request.requestBody.toString());
        }finally{
            insert new Log__c(Description__c = 'Result:'+authResult, 
                       Request__c = RestContext.request.requestBody.toString());
        }
        String soapResp = '<?xml version="1.0" encoding="UTF-8"?>'
            +'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
            +'<soapenv:Body>'
            +'<AuthenticateResult xmlns="urn:authentication.soap.sforce.com">'
            +'<Authenticated>'+authResult+'</Authenticated>'
            +'</AuthenticateResult>'
            +'</soapenv:Body>'
            +'</soapenv:Envelope>';
        response.responseBody = Blob.valueOf(soapResp);
    }

This web service simply parses the incoming SOAP XML request, extracts the fields from the request and checks their values with the checkCredentials() method.
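
The checkCredentials() method is not shown here; a minimal sketch of what it might look like, assuming the DelegatedToken__c fields used above (the real implementation in the repo may differ):

//hypothetical sketch: a token is valid if it exists for that username/IP
//and was created only a few minutes ago
private static Boolean checkCredentials(String username, String token, String sourceIp){
    List<DelegatedToken__c> tokens = [Select Id, CreatedDate From DelegatedToken__c
                                      Where Username__c = :username
                                      And Token__c = :token
                                      And RequestIP__c = :sourceIp
                                      Order By CreatedDate Desc Limit 1];
    if(tokens.isEmpty()) return false;
    return tokens[0].CreatedDate.addMinutes(5) > System.now();
}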

If the token is not expired you’ll be successfully redirected to the new ORG, logged in as the user you wanted.

A good practice is to use custom domains: you can thus replace “login.salesforce.com” with the “My Domain” of the corresponding ORG (you can also add a new field on the DelegatedUser__c SObject to store it).

To expose the public web service, you simply need to create a new Site:

Then click on your Site, click the “Public Access Settings” button and add the RESTDelegatedAuthenticator class to the Apex classes accessible by this public profile.

The complete code is right here in this GitHub repository.

May the Force.com be with you!

[Salesforce] My favorite Spring ’15 features


This is my top 10 list of the Spring ’15 Force.com platform update’s features.

Make Long-Running Callouts from a Visualforce Page

This is a really important feature. Imagine you have big Visualforce pages in which Apex methods, triggered by buttons / links / rerenders, make one or more callouts to external services.

Imagine hundreds of users use that page and the external services go down, thus causing timeouts on the callouts.

This could lead to serious problems, because there can be no more than 10 long-running Apex processes at the same time: hitting that limit leads to unexpected and horrible output errors.

Using this new feature we can now “asynchronize” synchronous Apex callouts, using the new Continuation Apex class, which acts like a callback that is invoked when the callout has ended.
Basically you split your logic into 2 methods: the first (invoked by a button / link) starts the callout, and the second receives the results of the callout.
Even if the callout(s) (you can trigger up to 3 callouts at a time) fail or time out, this call won’t count towards the long-running processes limit.
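
A minimal sketch of how the pattern might look (controller name, endpoint and method names are hypothetical):

//hypothetical controller showing the Continuation pattern
public with sharing class LongCalloutController {
    public String result {get; set;}
    private String requestLabel;

    //action method bound to a button / link: starts the asynchronous callout
    public Object startCallout() {
        Continuation con = new Continuation(60); //timeout in seconds
        con.continuationMethod = 'processResponse'; //callback method name
        HttpRequest req = new HttpRequest();
        req.setMethod('GET');
        req.setEndpoint('http://www.example.com/slow-service'); //hypothetical endpoint
        this.requestLabel = con.addHttpRequest(req);
        return con;
    }

    //callback: invoked by the platform when the response is ready
    public Object processResponse() {
        HttpResponse response = Continuation.getResponse(this.requestLabel);
        this.result = response.getBody();
        return null; //re-render the current page
    }
}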

I’m gonna write down a post to show how this works using HttpRequests and also SOAP requests.

Set Up Test Data for an Entire Test Class

This is an extremely useful feature for large ORGs with hundreds of test classes.

This allows us to write “test setup” methods (with the new @testSetup annotation) that are called once per test class: when a test method is executed, the context already contains all the objects created in those setup methods (you can have more than one method of this kind, but the execution order is not guaranteed). Each test method rolls back its data at the end, so the following test methods of the class see the setup records as if they had just been created.
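
A minimal sketch of how it can be used (class and record names are just examples):

@isTest
private class AccountServiceTest {
    //executed once per test class: these records are visible to every test method
    @testSetup
    static void createTestData() {
        insert new Account(Name = 'Test Account');
    }

    //each test method sees the setup data as if it had just been created
    @isTest
    static void testAccountIsThere() {
        Account acc = [Select Id, Name From Account Limit 1];
        System.assertEquals('Test Account', acc.Name);
    }
}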

Deploy Your Components in Less Time

Imagine you get the GO for a deploy to your production ORG but you have to do it on Saturday at midnight (and the test methods require more than an hour to run!!!!)…this is absolutely not cool!

This feature allows you to send a deploy to production before your deadline: test classes are run during this “frozen” deploy and it stays frozen for 4 days; when you are ready you simply click the “Quick Deploy” button and the components are deployed instantly! Whoooaa this is magic!

Create Named Credentials to Define Callout Endpoints and Their Authentication Settings

You can finally leave all the authentication mess to the Force.com platform: just sit at your screen and simply code your business requirements, while sessions, tokens and whatever else is needed are magically stored on Salesforce.

You can also use a named credential in a simplified way to store callout endpoints.
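
For example, once a named credential has been configured (here a hypothetical one called My_Service), a callout could look like this:

//the "callout:" scheme tells the platform to resolve the endpoint and the
//authentication from the My_Service named credential (hypothetical name)
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_Service/some/path');
req.setMethod('GET');
HttpResponse res = new Http().send(req);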

Business Continuity – Promote Business Continuity with Organization Sync

With Organization Sync you can set up a secondary ORG to be used when your primary ORG needs some maintenance (e.g. a big release of your developments), giving 24/7 service availability.

Orgs are synched automatically through a data replication mapping.

I haven’t tested this feature yet, but since it is available on Developer orgs I’ll certainly try it soon.

Visually Automate Your Business Processes

This is the Lightning Process Builder, a cool visual tool that helps you automate your business processes, from simple actions to complex combinations.

It seems really awesome but unfortunately it is not available in the pre-release program.
Expect to see tens of posts about this new feature in the next few days!

Lightning Components (BETA)

Lightning components are still in beta and the builder in pilot.
But we have new additions:

  • New Components: brand new base components (such as select input, textarea, …)
  • Namespace Requirement Removed: finally no need to set up a namespace, easing the creation of packages and deployment across orgs
  • Support Added for Default Namespace: follows the previous point
  • Extend Lightning Components and Apps: like classes, you can extend components and apps
  • Referential Integrity Validation Added: integrity validation has been boosted

New increased limits

There are some new increased limits:

  • Deploy and Retrieve More Components: limit increased from 5000 to 10000 components
  • Chain More Jobs with Queueable Apex: from 2 chained jobs to infinity (except for Developer and Trial orgs, where the limit is 5); see the chaining sketch after this list
  • Make Apex Callouts with More Data: size per callout (HTTP or SOAP) increased from 3 MB to 6 MB for synchronous Apex and 12 MB for asynchronous Apex
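
A minimal sketch of what job chaining looks like (class name and work are placeholders):

//hypothetical self-chaining job: since Spring '15 there is no chain-depth
//limit (except on Developer and Trial orgs); a real job needs a stop condition
public class ChainedJob implements Queueable {
    public void execute(QueueableContext context) {
        //...do a unit of work here, then enqueue the next link of the chain...
        System.enqueueJob(new ChainedJob());
    }
}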

Create or Edit Records Owned by Inactive Users

All users can now edit records owned by inactive users; before Spring ’15 only administrators could do it!
Believe me, this is really useful.

Legitimize User Activity using Login Forensics (PILOT)

These are forensic metrics to identify suspicious user behavior (such as logins from unusual IPs or an excessive number of logins compared to the average).
This is a PILOT program, so you have to explicitly ask Salesforce to have it enabled.

As usual, may the Force.com be with you guys!

[Salesforce / Apex] Retrieving zipped static resource files from code

Some days ago one of my awesome colleagues asked me: “Can you get a zipped file into a static resource from Apex?”.
My very first thought was NO.

But after having said that syllable, I understood that it could be possible, using an HTTP GET call + cookies + the correct resource URL.

This is the solution I came up with:

Http h = new Http();
HttpRequest request = new HttpRequest();
request.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()+'/resource/ZIPPEDRESOURCE/file.ext');
request.setMethod('GET');
//pass the current session id as a cookie to access the protected resource
request.setHeader('Cookie','sid='+UserInfo.getSessionId()+';');
request.setTimeout(60000);
HttpResponse response = h.send(request);
if(response.getStatusCode() != 200){
    //handle the error
    throw new CustomException('Unable to load resource');
}
//now you can get the content
String fileContent = response.getBody();
//or
Blob fileContentAsBlob = response.getBodyAsBlob();

The last thing to do is to enable your instance URL from Setup > Remote Site Settings, allowing https://xxx.salesforce.com; I also suggest allowing https://c.xxx.visual.force.com and whatever else makes sense (there could be some URL patterns I haven’t thought of).

No surprise that this could be used to get any resource in the CRM (with the proper URL handling).

[Salesforce / Apex] Queueable interfaces – Unleash the async power!

The upcoming Winter ’15 release comes with the new Queueable interface.

I wanted to go deep on this, and tried to apply its features to a real case.

If you are (like me) in a TLDR; state of mind, click here.

The main differences between future methods (remember the @future annotation?) and queueable jobs are:

  • When you enqueue a new job you get a job id (that you can actually monitor)…you got it, like batch jobs or scheduled jobs!
  • You can enqueue a queueable job inside a queueable job (you cannot call a future method inside a future method!)
  • You can have complex Objects (such as SObjects or Apex Objects) in the job context (@future only supports primitive data types)

I wanted to show a practical use case for this new feature.

Imagine you have a business flow in which you have to send a callout whenever a Case is closed.
Let’s assume the callout will be a REST POST method that accepts a JSON body with all the non-null Case fields exactly as they are when the Case is closed (the endpoint of the service will be a simple RequestBin).

Using a future method we would pass the Case ID to the job and then make a subsequent SOQL query: this goes against the requirement to pass the fields exactly as they were at the time of the update.
This may seem an exaggeration, but in big Orgs with hundreds of future methods in execution (due to system overload) a future method can be triggered minutes later, so the ticket state can be different from when the future was actually enqueued.

For this implementation we will use a Callout__c Sobject with the following fields:

  • Case__c: master/detail on Case
  • Job_ID__c: external ID / unique / case sensitive, stores the job id
  • Sent_on__c: date/time, when the callout has taken place
  • Duration__c: integer, milliseconds for the callout to be completed
  • Status__c: picklist, values are Queued (default), OK (response 200), KO (response != 200) or Failed (exception)
  • Response__c: long text, stores the server response

Let’s start with the trigger:

trigger CaseQueueableTrigger on Case (after insert, after update) {

    List<Callout__c> calloutsScheduled = new List<Callout__c>();
    for(Integer i = 0; i < Trigger.new.size(); i++){
        if((Trigger.isInsert || 
           Trigger.new[i].Status != Trigger.old[i].Status)
            && Trigger.new[i].Status == 'Closed' )
        {
            ID jobID = System.enqueueJob(new CaseQueuebleJob(Trigger.new[i]));
            calloutsScheduled.add(new Callout__c(Job_ID__c = jobID, 
                                                 Case__c = Trigger.new[i].Id,
                                                 Status__c = 'Queued'));
        }
    }
    if(calloutsScheduled.size() > 0){
        insert calloutsScheduled;
    }
}

The code iterates through the trigger cases and if they are created as “Closed” or the Status field changes to “Closed” a new job is enqueued and a Callout__c object is created.

This way we always have evidence on the system that the callout has been fired.

Let’s look at the job code:

    public class CaseQueuebleJob implements Queueable, Database.AllowsCallouts {
    . . .
    }

The Database.AllowsCallouts interface allows the job to make callouts.

Next thing is a simple constructor:

    /*
     * Case passed on class creation (the actual ticket from the Trigger)
     */
    private Case ticket{get;Set;}
    
    /*
     * Constructor
     */
    public CaseQueuebleJob(Case ticket){
        this.ticket = ticket;
    }

And this is the content of the interface’s execute method:

    
     // Interface method. 
     // Creates the map of non-null Case fields, gets the Callout__c object
     // depending on current context JobID.
     // In case of failure, the job is queued again.
     
    public void execute(QueueableContext context) {
        //1 - creates the callout payload
        String reqBody = JSON.serialize(createFromCase(this.ticket));
        
        //2 - gets the already created Callout__c object
        Callout__c currentCallout = [Select Id, Status__c, Sent_on__c, Response__c, Case__c,
                                     Job_ID__c From Callout__c Where Job_ID__c = :context.getJobId()];
        
        //3 - starting time (to get Duration__c)
        Long start = System.now().getTime();
        
        //4 - tries to make the REST call
        try{
            Http h = new Http();
            HttpRequest request = new HttpRequest();
            request.setMethod('POST');
            //change this to another bin @ http://requestb.in
            request.setEndpoint('http://requestb.in/nigam7ni');
            request.setTimeout(60000);
            request.setBody(reqBody);
            HttpResponse response = h.send(request);
            
            //4a - Response OK
            if(response.getStatusCode() == 200){
                currentCallout.status__c = 'OK';
            //4b - Response KO
            }else{
                currentCallout.status__c = 'KO';
            }
            //4c - saves the response body
            currentCallout.Response__c = response.getBody();
        }catch(Exception e){
            //5 - callout failed (e.g. timeout)
            currentCallout.status__c = 'Failed';
            currentCallout.Response__c = e.getStackTraceString().replace('\n',' / ')+' - '+e.getMessage();
            
            //6 - it would have been cool to reschedule the job again :(
            /*
             * Apparently this cannot be done due to "Maximum callout depth has been reached." exception
            ID jobID = System.enqueueJob(new CaseQueuebleJob(this.ticket));
            Callout__c retry = new Callout__c(Job_ID__c = jobID, 
                                                 Case__c = this.ticket.Id,
                                                Status__c = 'Queued');
            insert retry;
            */
        }
        //7 - sets various info about the job
        currentCallout.Sent_on__c = System.now();
        currentCallout.Duration__c = system.now().getTime()-start;
        update currentCallout;
        
        //8 - creates an Attachment with the request sent (it could be used to manually send it again with a re-submission tool)
        Attachment att = new Attachment(Name = 'request.json', 
                                        Body = Blob.valueOf(reqBody), 
                                        ContentType='application/json',
                                       ParentId = currentCallout.Id);
        insert att;
    }

These are the steps:

  1. Creates the callout JSON payload to be sent (watch the method in the provided GitHub repo for more details; nothing more than a describe and a map; a possible sketch is shown after this list)
  2. Gets the Callout__c object created by the trigger (and using the context’s Job ID)
  3. Gets the starting time of the callout being executed (to calculate the duration)
  4. Tries to make the rest call

    1. Server responded with a 200 OK
    2. Server responded with a non OK status (e.g. 400, 500)
    3. Saves the response body in the Response__c field
  5. Callout failed, so it fills the Response__c field with the stack trace of the exception (believe me, this is super useful when trying to understand what happened, especially when you have other triggers / code in the OK branch of the code)
  6. Unfortunately if you try to enqueue another job after a callout is done you get the following error: Maximum callout depth has been reached. This is apparently not documented, but it should be related to the fact that you can have only 2 jobs in the queue chain, so apparently if you enqueue the same job again you get this error.
    Otherwise the job would have tried to enqueue another identical job for a future execution.
  7. Sets time fields on the Callout__c object
  8. Finally creates an Attachment object with the JSON request that was sent: this way it can be inspected, knowing the precise state of the Case object sent, and it can be re-submitted using a re-submission tool that uses the same code (a Batch job?).
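
A possible sketch of the createFromCase() method referenced in step 1 (the repo version may differ; it simply describes the Case object and collects the non-null field values):

//hypothetical sketch: builds a map of all non-null Case fields using a describe call
private static Map<String, Object> createFromCase(Case ticket){
    Map<String, Object> values = new Map<String, Object>();
    Map<String, Schema.SObjectField> fields = Schema.SObjectType.Case.fields.getMap();
    for(String fieldName : fields.keySet()){
        Object value = ticket.get(fieldName);
        if(value != null){
            values.put(fieldName.toLowerCase(), value);
        }
    }
    return new Map<String, Object>{'values' => values};
}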

This is a simple Callout__c object on CRM:

And this is an example request:

{
    "values": {
        "lastmodifiedbyid": "005w0000003fj35AAA",
        "businesshoursid": "01mw00000009wh7AAA",
        "engineeringreqnumber": "767145",
        "casenumber": "00001001",
        "product": "GC1060",
        "planid": "a05w000000Gpig7AAB",
        "ownerid": "005w0000003fj35AAA",
        "createddate": "2014-08-09T09:54:17.000Z",
        "origin": "Phone",
        "isescalated": false,
        "status": "Closed",
        "slaviolation": "Yes",
        "accountid": "001w0000019wqEIAAY",
        "systemmodstamp": "2014-11-03T19:33:31.000Z",
        "isdeleted": false,
        "priority": "High",
        "id": "500w000000fqNRaAAM",
        "lastmodifieddate": "2014-11-03T19:33:31.000Z",
        "isclosedoncreate": true,
        "createdbyid": "005w0000003fj35AAA",
        "contactid": "003w000001EetwEAAR",
        "type": "Electrical",
        "closeddate": "2013-06-20T18:59:51.000Z",
        "subject": "Performance inadequate for second consecutive week",
        "reason": "Performance",
        "potentialliability": "Yes",
        "isclosed": true
    }
}

The code and the related metadata is available on this GitHub repo.
