When Salesforce is life!

[Salesforce / JS] Automatically download files from Apex (using an href link and Base64)

This post is a recap of this Salesforce Developer Forum thread.

We want to trigger the download of an Attachment, but the running user doesn't have access to the object (think of a Community user, for instance).

In the controller, read the Attachment's body into a String property encoded in Base64:

//Base64-encoded file body and its content type, exposed to the page
public String base64Value{get;set;}
public String contentType{get;set;}

public void loadAttachment(){
   //example query: select here the Attachment you actually need
   Attachment att = [Select Id, Body, ContentType From Attachment limit 1];
   base64Value = EncodingUtil.base64Encode(att.Body);
   contentType = att.ContentType;
}

In the page:

<a href="data:{!contentType};content-disposition:attachment;base64,{!base64Value}">Download file</a>

[Salesforce / Lightning] Loading scripts with RequireJS


UPDATE

With the introduction of ltng:require, this post is no longer a valid solution. Refer to the official Lightning documentation.
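
For reference, a minimal ltng:require sketch could look like the following (the BlogScripts resource paths and the afterScriptsLoaded callback name are assumptions for illustration):

<ltng:require scripts="{!join(',',
        $Resource.BlogScripts + '/jquery.min.js',
        $Resource.BlogScripts + '/bootstrap.min.js')}"
    afterScriptsLoaded="{!c.afterScriptsLoaded}"/>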

The following summarizes this blog post's solution (if you are not a TL;DR reader, you will find all my trials and errors there).

Dreamforce 2014 brought awesome new features to the Salesforce platform: one of the most interesting has been the introduction of the new Lightning framework for fast development of reusable components.
In addition to the (still in pilot) Lightning App Builder (an amazing drag & drop tool for easy creation of apps), it's not hard to figure out how far this new technology will take the Force.com platform.

The main reason you'd want to develop a new Lightning component is that you can compose components like a puzzle, making them communicate using events: read the official development guide for more details on how to build your first Lightning component.

One of the things you'll be doing while developing new components is using external libraries to add more features to your applications.
You'll face the following constraints:

  • You can only load external libraries from a static resource
  • You cannot use the {!$Resource.resourceName} expression because you are not inside a Visualforce page, so you have to refer to “/resource/[resourceName]” directly in your <script> tags

When loading more than one external script, it sometimes happens that the loading order is not the expected one (even if you inserted the script tags in the right order).

You can use the RequireJS library to overcome this problem (making sure it is the one responsible for loading the libraries in the correct order).

But the question is: who is responsible for loading the RequireJS script itself?

And who will execute the RequireJS loading code ONLY AFTER the library has been loaded?

Trying to get the right answer (see more details on the failed attempts here and the community thread that led me to the solution), I came up with the following approach.

The solution is kept simple to ease understanding: you can improve it to add more features (e.g. style sheet loading) and turn it into a component.

This is the main code (all the code has been packaged into this GitHub repository):

BlogRequireJSDinamic.app

<aura:application>
        <aura:handler event="forcelogic2:BlogRequireJSEvent" action="{!c.initJS}"/>
        <aura:registerEvent type="forcelogic2:BlogRequireJSEvent" name="requireJSEvent"/>
        <aura:handler name="init" value="{!this}" action="{!c.doInit}" />
        <div id="afterLoad">Old value</div>
    </aura:application>
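
The application handles a custom application event; as a minimal sketch consistent with the usage above (check the GitHub repo for the actual file), BlogRequireJSEvent.evt is essentially:

<aura:event type="APPLICATION" description="Fired once RequireJS has been loaded"/>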

BlogRequireJSDinamicController.js

({
    /*
        Sets up the RequireJS library (async load)
    */
    doInit : function(component, event, helper){
        
        if (typeof require !== "undefined") {
            var evt = $A.get("e.forcelogic2:BlogRequireJSEvent");
            evt.fire();
        } else {
            var head = document.getElementsByTagName('head')[0];
            var script = document.createElement('script');
            
            script.src = "/resource/RequireJS"; 
            script.type = 'text/javascript';
            script.key = "/resource/RequireJS"; 
            script.helper = this;
            script.id = "script_" + component.getGlobalId();
            var hlp = helper;
            script.onload = function scriptLoaded(){
                var evt = $A.get("e.forcelogic2:BlogRequireJSEvent");
                evt.fire();
            };
            head.appendChild(script);
        }
    },
    
    initJS : function(component, event, helper){
        require.config({
            paths: {
                "jquery": "/resource/BlogScripts/jquery.min.js?",
                "bootstrap": "/resource/BlogScripts/boostrap.min.js?"
            }
        });
        console.log("RequiresJS has been loaded? "+(require !== "undefined"));
        //loading libraries sequentially
        require(["jquery"], function($) {
            console.log("jQuery has been loaded? "+($ !== "undefined"));
            require(["bootstrap"], function(bootstrap) {
                console.log("bootstrap has been loaded? "+(bootstrap !== "undefined"));

                $A.run(function(){
                    //do whatever GUI initialization you want
                    //in the aura context
                    $("#afterLoad").html("VALUE CHANGED!!!");
                });
                
            });//require end
        });//require end
    }
})

This is what you’ll get in the developer console of your browser:

RequireJS has been loaded? true
jQuery has been loaded? true
bootstrap has been loaded? true

And you will see the “Old value” string replaced with “VALUE CHANGED!!!” in the page body.

What has happened?

When the app loads (init event), the doInit function tries to understand whether the RequireJS library has been loaded.
If so, it fires a BlogRequireJSEvent event.
If not yet loaded, it dynamically creates a <script> tag with the path to RequireJS, binding an onload handler that uses the same event to signal that the library has been loaded.

The same app is also a handler for the BlogRequireJSEvent event through the initJS function: it loads the jQuery and Bootstrap libraries sequentially, so you can be pretty sure they are loaded in the correct order.

The next step is to make a component out of this app, so you can use a RequireJS loader component in all your apps and have all your components handle the BlogRequireJSEvent event.
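
A minimal sketch of what such a loader component could look like (hypothetical markup, not taken from the linked repo; its client-side controller would contain the same doInit logic shown above):

<aura:component>
    <aura:registerEvent type="forcelogic2:BlogRequireJSEvent" name="requireJSEvent"/>
    <aura:handler name="init" value="{!this}" action="{!c.doInit}"/>
</aura:component>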

[Salesforce / Apex] Retrieving zipped static resource files from code

Some days ago one of my awesome colleagues asked me: “Can you get a zipped file into a static resource from Apex?”.
My very first thought was NO.

But right after saying that, I realized it could actually be possible using an HTTP GET call + cookies + the correct resource URL.

This is the solution I came up with:

Http h = new Http();
HttpRequest request = new HttpRequest();
request.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()+'/resource/ZIPPEDRESOURCE/file.ext');
request.setMethod('GET');
//pass the current session ID as a cookie to be authorized to read the resource
request.setHeader('Cookie','sid='+UserInfo.getSessionId()+';');
request.setTimeout(60000);
HttpResponse response = h.send(request);
if(response.getStatusCode() != 200){
    //handle the error
    throw new CustomException('Unable to load resource');    
}
//now you can get the content
String fileContent = response.getBody();
//or
Blob fileContentAsBlob = response.getBodyAsBlob();

The last thing to do is to allow your instance URL from Setup > Remote Site Settings, allowing https://xxx.salesforce.com but I also suggest https://c.xxx.visual.force.com and whatever else makes sense (there could be some URL patterns I haven't thought of).

No surprise that this could be used to get any resource in the CRM (with the proper URL handling).

[Salesforce / Apex] Queueable interfaces – Unleash the async power!

The upcoming Winter '15 release comes with the new Queueable interface.

I wanted to go deep on this, and tried to apply its features to a real case.

If you are (like me) in a TLDR; state of mind, click here.

The main differences between future methods (remember the @future annotation?) and queueable jobs are:

  • When you enqueue a new job you get a job ID (that you can actually monitor)… you got it, just like batch jobs or scheduled jobs!
  • You can enqueue a queueable job inside another queueable job, while you cannot call a future method inside a future method (see the sketch after this list)
  • You can have complex objects (such as SObjects or Apex objects) in the job context (@future only supports primitive data types)
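
A minimal sketch of these points (the class names are illustrative, not from the post's repo):

public class SecondJob implements Queueable {
    public void execute(QueueableContext context) {
        //do more async work here
    }
}

public class FirstJob implements Queueable {
    //complex state (e.g. an SObject) can be carried into the job
    private Case ticket;
    public FirstJob(Case ticket){ this.ticket = ticket; }
    public void execute(QueueableContext context) {
        //a queueable job can enqueue another queueable job
        System.enqueueJob(new SecondJob());
    }
}

//from anonymous Apex (or any Apex context): enqueueing returns a job ID you can monitor
ID jobId = System.enqueueJob(new FirstJob(new Case(Subject = 'Test')));
AsyncApexJob job = [Select Id, Status, NumberOfErrors From AsyncApexJob Where Id = :jobId];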

I wanted to show a practical use case for this new feature.

Imagine you have a business flow in which you have to send a callout whenever a Case is closed.
Let’s assume the callout will be a REST POST method that accepts a json body with all the non-null Case fields that are filled exactly when the Case is closed (the endpoint of the service will be a simple RequestBin).

Using a future method we would pass the Case ID to the job and then make a subsequent SOQL query: this goes against the requirement to send the fields as they were at the exact time of the update.
This may seem an exaggeration, but in big orgs with hundreds of future methods in execution (due to system overload) a future method can run minutes later, so the ticket state can be different from when it was actually enqueued.
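
To make the contrast concrete, a future-method version would look roughly like this (a sketch, not code from the post):

public class CaseFutureCallout {
    @future(callout=true)
    public static void sendCallout(Id caseId){
        //only primitives can be passed to a future method: the Case must be
        //re-queried here, so its field values may already differ from the ones
        //it had when the trigger fired
        Case ticket = [Select Id, Status, Subject From Case Where Id = :caseId];
        //...build and send the callout...
    }
}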

For this implementation we will use a Callout__c Sobject with the following fields:

  • Case__c: master/detail on Case
  • Job_ID__c: external ID / unique / case sensitive, stores the job id
  • Sent_on__c: date/time, when the callout took place
  • Duration__c: integer, milliseconds taken for the callout to complete
  • Status__c: picklist, values are Queued (default), OK (response 200), KO (response != 200) or Failed (exception)
  • Response__c: long text, stores the server response

Let’s start with the trigger:

trigger CaseQueueableTrigger on Case (after insert, after update) {

    List<Callout__c> calloutsScheduled = new List<Callout__c>();
    for(Integer i = 0; i < Trigger.new.size(); i++){
        if((Trigger.isInsert || 
           Trigger.new[i].Status != Trigger.old[i].Status)
            && Trigger.new[i].Status == 'Closed' )
        {
            ID jobID = System.enqueueJob(new CaseQueuebleJob(Trigger.new[i]));
            calloutsScheduled.add(new Callout__c(Job_ID__c = jobID, 
                                                 Case__c = Trigger.new[i].Id,
                                                Status__c = 'Queued'));
        }
    }
    if(calloutsScheduled.size()>0){
        insert calloutsScheduled;
    }
}

The code iterates over the triggered Cases and, if they are created as “Closed” or the Status field changes to “Closed”, a new job is enqueued and a Callout__c record is created.

This way we always have evidence on the system that the callout has been fired.

Let's look at the job code:

    public class CaseQueuebleJob implements Queueable, Database.AllowsCallouts {
    . . .
    }

The Database.AllowsCallouts interface allows the job to make callouts.

Next thing is a simple constructor:

    /*
     * Case passed on class creation (the actual ticket from the Trigger)
     */
    private Case ticket{get;set;}
    
    /*
     * Constructor
     */
    public CaseQueuebleJob(Case ticket){
        this.ticket = ticket;
    }

And this is the content of the interface’s execute method:

    
     // Interface method. 
     // Creates the map of non-null Case fields, gets the Callout__c object
     // depending on current context JobID.
     // In case of failure, the job is queued again.
     
    public void execute(QueueableContext context) {
        //1 - creates the callout payload
        String reqBody = JSON.serialize(createFromCase(this.ticket));
        
        //2 - gets the already created Callout__c object
        Callout__c currentCallout = [Select Id, Status__c, Sent_on__c, Response__c, Case__c,
                                     Job_ID__c From Callout__c Where Job_ID__c = :context.getJobId()];
        
        //3 - starting time (to get Duration__c)
        Long start = System.now().getTime();
        
        //4 - tries to make the REST call
        try{
            Http h = new Http();
            HttpRequest request = new HttpRequest();
            request.setMethod('POST');
            //change this to another bin @ http://requestb.in
            request.setEndpoint('http://requestb.in/nigam7ni');
            request.setTimeout(60000);
            request.setBody(reqBody);
            HttpResponse response = h.send(request);
            
            //4a - Response OK
            if(response.getStatusCode() == 200){
                currentCallout.status__c = 'OK';
            //4b - Reponse KO
            }else{
                currentCallout.status__c = 'KO';
            }
            //4c - saves the response body
            currentCallout.Response__c = response.getBody();
        }catch(Exception e){
            //5 - callout failed (e.g. timeout)
            currentCallout.status__c = 'Failed';
            currentCallout.Response__c = e.getStackTraceString().replace('\n',' / ')+' - '+e.getMessage();
            
            //6 - it would have been cool to reschedule the job again :(
            /*
             * Apparently this cannot be done due to the "Maximum callout depth has been reached." exception
            ID jobID = System.enqueueJob(new CaseQueuebleJob(this.ticket));
            Callout__c retry = new Callout__c(Job_ID__c = jobID, 
                                                 Case__c = this.ticket.Id,
                                                Status__c = 'Queued');
            insert retry;
            */
        }
        //7 - sets various info about the job
        currentCallout.Sent_on__c = System.now();
        currentCallout.Duration__c = system.now().getTime()-start;
        update currentCallout;
        
        //8 - creates an Attachment with the request that was sent (it could be used to manually send it again with a re-submission tool)
        Attachment att = new Attachment(Name = 'request.json', 
                                        Body = Blob.valueOf(reqBody), 
                                        ContentType='application/json',
                                       ParentId = currentCallout.Id);
        insert att;
    }

These are the steps:

  1. Creates the callout JSON payload to be sent (see the method in the provided GitHub repo for more details; a minimal sketch follows this list): it is nothing more than a describe call and a map
  2. Gets the Callout__c object created by the trigger (using the context's job ID)
  3. Gets the starting time of the callout being executed (to calculate the duration)
  4. Tries to make the REST call

    1. Server responded with 200 OK
    2. Server responded with a non-OK status (e.g. 400, 500)
    3. Saves the response body in the Response__c field
  5. Callout failed, so it fills the Response__c field with the stack trace of the exception (believe me, this is super useful when trying to understand what happened, especially when you have other triggers / code in the OK branch)
  6. Unfortunately, if you try to enqueue another job after a callout has been made you get the error "Maximum callout depth has been reached.". This is apparently not documented, but it should be related to the fact that you can only have 2 jobs in a queue chain, so enqueueing the same job again triggers this error.
    Otherwise the job would have enqueued an identical job for a future retry.
  7. Sets the time fields on the Callout__c object
  8. Finally creates an Attachment with the JSON request that was sent: this way it can be inspected (knowing the precise state of the Case object that was sent) and re-submitted using a re-submission tool that reuses the same code (a batch job?).
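
As a reference, here is a minimal sketch of what a createFromCase-like method could look like (the real implementation is in the post's GitHub repo; judging from the sample request below it also seems to normalize custom field names): essentially a describe call plus a map of the non-null values.

private static Map<String, Object> createFromCase(Case ticket){
    Map<String, Object> values = new Map<String, Object>();
    //describe the Case object to iterate over all its fields
    Map<String, Schema.SObjectField> fieldMap = Schema.SObjectType.Case.fields.getMap();
    for(String fieldName : fieldMap.keySet()){
        //in trigger context every field is populated, so get() is safe here
        Object value = ticket.get(fieldName);
        if(value != null){
            values.put(fieldName.toLowerCase(), value);
        }
    }
    return new Map<String, Object>{'values' => values};
}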

This is a simple Callout__c object on CRM:

And this is an example request:

{
    "values": {
        "lastmodifiedbyid": "005w0000003fj35AAA",
        "businesshoursid": "01mw00000009wh7AAA",
        "engineeringreqnumber": "767145",
        "casenumber": "00001001",
        "product": "GC1060",
        "planid": "a05w000000Gpig7AAB",
        "ownerid": "005w0000003fj35AAA",
        "createddate": "2014-08-09T09:54:17.000Z",
        "origin": "Phone",
        "isescalated": false,
        "status": "Closed",
        "slaviolation": "Yes",
        "accountid": "001w0000019wqEIAAY",
        "systemmodstamp": "2014-11-03T19:33:31.000Z",
        "isdeleted": false,
        "priority": "High",
        "id": "500w000000fqNRaAAM",
        "lastmodifieddate": "2014-11-03T19:33:31.000Z",
        "isclosedoncreate": true,
        "createdbyid": "005w0000003fj35AAA",
        "contactid": "003w000001EetwEAAR",
        "type": "Electrical",
        "closeddate": "2013-06-20T18:59:51.000Z",
        "subject": "Performance inadequate for second consecutive week",
        "reason": "Performance",
        "potentialliability": "Yes",
        "isclosed": true
    }
}

The code and the related metadata is available on this GitHub repo.

[Salesforce / Lightning] Loading scripts


UPDATE

With the introduction of ltng:require, this post is no longer a valid solution. Refer to the official Lightning documentation.

This post started more as a request for help than a technical blog post, but it turned out to be an awesome way to see the Salesforce community in action and ready to help!

At the last Dreamforce '14, big Mark presented the Lightning framework for fast development of reusable components (see details here and the awesome Topcoder track). Click here for the "Lightning Components Developer's Guide", well written and clear.

I've noticed a strange behavior regarding the loading of external JavaScript libraries. These are the requirements regarding external script loading:

  • You can only load external libraries from a static resource
  • You cannot use the {!$Resource.resourceName} expression because you are not inside a Visualforce page, so you have to refer to “/resource/[resourceName]” directly in your <script> tags
  • From page 100 of the guide: “If you want to use a library, such as jQuery, to access the DOM, use it in afterRender().”

Apparently the last sentence is not true. The problem arose because I loaded jQuery + Bootstrap and sometimes (and randomly) the Bootstrap plugin did not load because jQuery was not yet loaded: the cause was certainly the fact that the libraries were not loaded sequentially!

TL;DR? Click here for the solution!

This is what I'm trying to do:

BlogScriptApp.app

<aura:application>
    <aura:handler name="init" value="{!this}" action="{!c.doInit}" />
    <aura:handler event="aura:doneRendering" action="{!c.doneRendering}"/>
    <script src="/resource/BlogScripts/jquery.min.js" ></script>    
    <div id="afterLoad">Old value</div>
</aura:application>

BlogScriptAppController.js

({
    doInit : function(component, event, helper) {
        try{
            $("#afterLoad").html("VALUE CHANGED!!!");
            console.log('doInit: Success');
        }catch(Ex){
            console.log('doInit: '+Ex);
        }
    },
    doneRendering : function(component, event, helper) {
        try{
            $("#afterLoad").html("VALUE CHANGED!!!");
            console.log('doneRendering: Success');
        }catch(Ex){
            console.log('doneRendering: '+Ex);
            
            setTimeout(function(){
                try{
                    $("#afterLoad").html("VALUE CHANGED!!!");
                    console.log('doneRendering-Timeout: Success');
                }catch(Ex){
                    console.log('doneRendering-Timeout: '+Ex);
                }
            }, 100);
        }
    }
})

BlogScriptAppRenderer.js

({
    afterRender : function(){
        this.superAfterRender();
        try{
            $("#afterLoad").html("VALUE CHANGED!!!");
            console.log('afterRender: Success');
        }catch(Ex){
            console.log('afterRender: '+Ex);
        }
    }
})

This is what I get in the console

doInit: ReferenceError: $ is not defined
afterRender: ReferenceError: $ is not defined
doneRendering: ReferenceError: $ is not defined
doneRendering-Timeout: Success 

This means that by the time of the last app event (aura:doneRendering) the libraries are not yet loaded, and the only way to work around it is to detach from the current execution and use setTimeout to call the needed code asynchronously. No surprise that this could still fail if the jQuery library takes too long to load.

One of the suggestions was to use RequireJS in the app, but the problem is the same: if the external scripts are not loaded, the require function does not exist and you cannot load its configuration. RequireJS would otherwise allow you to load all the libraries in the correct order (for instance jQuery, then Bootstrap, then another lib…), like in this example:

BlogRequireJSApp.app

<aura:application>
    <aura:handler event="aura:doneRendering" action="{!c.doneRendering}"/>
    <script src="/resource/RequireJS" ></script>    
    <div id="afterLoad">Old value</div>
</aura:application>

BlogRequireJSAppController.js

({
    doneRendering : function(component, event, helper) {
        try{
            helper.loadRequire(component);
            console.log('doneRendering: Success');
        }catch(ex){
            console.log('doneRendering: '+ex);
            setTimeout(function(){
                try{
                    helper.loadRequire(component);
                    console.log('doneRendering-Timeout: Success');
                }catch(ex){
                    console.log('doneRendering-Timeout: '+ex);
                }
            }, 100);
        }
    }
})

BlogRequireJSAppHelper.js

({
    loadRequire : function(component) {
        require.config({
            paths: {
                "jquery": "/resource/BlogScripts/jquery.min.js?",
                "bootstrap": "/resource/BlogScripts/boostrap.min.js?"
            }
        });
        
        require(["jquery"], function($) {
            require(["bootstrap"], function(bootstrap, chartJS) {
                $("#afterLoad").html("VALUE CHANGED!!!");
            });
        });
    }
})

This is what I get in the console

doneRendering: ReferenceError: require is not defined
doneRendering-Timeout: Success 

This way you'll have the scripts loaded in the correct order using the RequireJS library: anyway, if the RequireJS library itself is not yet loaded (it depends on your data connection) you'll see another exception at the end of the log.

Here comes the Salesforce community!

I posted a question on the SF developer forums and got a super cool response. From that response I came up with a simple solution that uses dynamic script loading and a single fired event: this solution is a simpler reinterpretation of the forum's one, meant to be easier to understand.

BlogRequireJSDinamic.app

<aura:application>
        <aura:handler event="forcelogic2:BlogRequireJSEvent" action="{!c.initJS}"/>
        <aura:registerEvent type="forcelogic2:BlogRequireJSEvent" name="requireJSEvent"/>
        <aura:handler name="init" value="{!this}" action="{!c.doInit}" />
        <div id="afterLoad">Old value</div>
    </aura:application>

BlogRequireJSDinamicController.js

({
    /*
        Sets up the RequireJS library (async load)
    */
    doInit : function(component, event, helper){
        
        if (typeof require !== "undefined") {
            var evt = $A.get("e.forcelogic2:BlogRequireJSEvent");
            evt.fire();
        } else {
            var head = document.getElementsByTagName('head')[0];
            var script = document.createElement('script');
            
            script.src = "/resource/RequireJS"; 
            script.type = 'text/javascript';
            script.key = "/resource/RequireJS"; 
            script.helper = this;
            script.id = "script_" + component.getGlobalId();
            var hlp = helper;
            script.onload = function scriptLoaded(){
                var evt = $A.get("e.forcelogic2:BlogRequireJSEvent");
                evt.fire();
            };
            head.appendChild(script);
        }
    },
    
    initJS : function(component, event, helper){
        require.config({
            paths: {
                "jquery": "/resource/BlogScripts/jquery.min.js?",
                "bootstrap": "/resource/BlogScripts/boostrap.min.js?"
            }
        });
        console.log("RequiresJS has been loaded? "+(require !== "undefined"));
        //loading libraries sequentially
        require(["jquery"], function($) {
            console.log("jQuery has been loaded? "+($ !== "undefined"));
            require(["bootstrap"], function(bootstrap) {
                console.log("bootstrap has been loaded? "+(bootstrap !== "undefined"));
                $("#afterLoad").html("VALUE CHANGED!!!");
            });//require end
        });//require end
    }
})

This is what I get in the console

RequireJS has been loaded? true
jQuery has been loaded? true
bootstrap has been loaded? true 

What has happened? When the app loads (init event), the doInit function tries to understand whether the RequireJS library has been loaded. If so, it fires a BlogRequireJSEvent event. If not yet loaded, it dynamically creates a <script> tag with the path to RequireJS, attaching an onload handler that uses the same event to signal that the library has been loaded.

The same app is also a handler for the BlogRequireJSEvent event through the initJS function: it loads the jQuery and Bootstrap libraries sequentially, so you can be pretty sure they are loaded in the correct order.

The next step in the solution given in the forums is to make a component that does all the work and fires an event that can be handled from anywhere in your app / components set.

All the code above has been packaged into a GitHub repository. Enjoy!

[Salesforce] Practical guide to setup a LiveAgent

I've created a simple guide to set up Live Agent on an org with a custom console app.
This is the link: Live_Agent_base_configuration.pdf.
This is a really simple guide, believe me, and no further explanation is given, because the only aim is to get Live Agent working.

Let me know what you think!

May The Force.com be with you!

[NodeJS + Salesforce SOAP WS] How to consume a Salesforce SOAP WSDL

I was wondering how to consume Salesforce WSDLs with Node.js.
I found the Node Soap package (see npm) and tried to consume the Partner WSDL.
I saved the WSDL in the "sf-partner.wsdl" file and played with the methods to get Node.js to speak SOAP with Salesforce.
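
If you want to try this yourself, the package can be installed with npm (assuming a standard Node.js setup):

npm install soap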

var soap = require('soap');
var url = './sf-partner.wsdl';
soap.createClient(url, function(err, client) {
   console.log('Client created');
   console.log(client.SforceService.Soap); //all methods usable in the stub
});

If you try to console.log(client) you will see too much data.

This is the output:

{ login: [Function],
  describeSObject: [Function],
  describeSObjects: [Function],
  describeGlobal: [Function],
  describeDataCategoryGroups: [Function],
  describeDataCategoryGroupStructures: [Function],
  describeFlexiPages: [Function],
  describeAppMenu: [Function],
  describeGlobalTheme: [Function],
  describeTheme: [Function],
  describeLayout: [Function],
  describeSoftphoneLayout: [Function],
  describeSearchLayouts: [Function],
  describeSearchScopeOrder: [Function],
  describeCompactLayouts: [Function],
  describeTabs: [Function],
  create: [Function],
  update: [Function],
  upsert: [Function],
  merge: [Function],
  delete: [Function],
  undelete: [Function],
  emptyRecycleBin: [Function],
  retrieve: [Function],
  process: [Function],
  convertLead: [Function],
  logout: [Function],
  invalidateSessions: [Function],
  getDeleted: [Function],
  getUpdated: [Function],
  query: [Function],
  queryAll: [Function],
  queryMore: [Function],
  search: [Function],
  getServerTimestamp: [Function],
  setPassword: [Function],
  resetPassword: [Function],
  getUserInfo: [Function],
  sendEmailMessage: [Function],
  sendEmail: [Function],
  performQuickActions: [Function],
  describeQuickActions: [Function],
  describeAvailableQuickActions: [Function] }

There is a quicker way to obtain this list using console.log(client.describe()), but this seems not to work with big WSDLs like the Salesforce ones (maximum call stack error).
The first step was to log in to obtain a valid session ID (using the SOAP login action):

soap.createClient(url, function(err, client) {
    client.login({username: '[email protected]', password: 'FreakPasswordWithTkenIfNeeded'}, function(err, result, raw){
        if(err) console.log(err);
        if(result){
            console.log(result.result);
        }
    });
});

And this is the result:

{ metadataServerUrl: 'https://na15.salesforce.com/services/Soap/m/29.0/00Di0000000Hxxx',
  passwordExpired: false,
  sandbox: false,
  serverUrl: 'https://na15.salesforce.com/services/Soap/u/29.0/00Di0000000Hxxx',
  sessionId: 'XXXXXXXXXX',
  userId: '005i0000000MXXXAAC',
  userInfo: 
   { accessibilityMode: false,
     currencySymbol: '€',
     orgAttachmentFileSizeLimit: 5242880,
     orgDefaultCurrencyIsoCode: 'EUR',
     orgDisallowHtmlAttachments: false,
     orgHasPersonAccounts: false,
     organizationId: '00Di0000000HxxxXXX',
     organizationMultiCurrency: false,
     organizationName: 'Challenges Co.',
     profileId: '00ei0000000UM6PAAW',
     roleId: {},
     sessionSecondsValid: 7200,
     userDefaultCurrencyIsoCode: {},
     userEmail: '[email protected]',
     userFullName: 'Admin',
     userId: '005i0000000MxxxXXX',
     userLanguage: 'en_US',
     userLocale: 'en_US',
     userName: '[email protected]',
     userTimeZone: 'Europe/Rome',
     userType: 'Standard',
     userUiSkin: 'Theme3' } }

Now the problem was how to set the new endpoint and the session ID for the next call, and this is the solution:

  //sets new soap endpoint and session id
  client.setEndpoint(result.result.serverUrl);
  var sheader = {SessionHeader:{sessionId: result.result.sessionId}};
  client.addSoapHeader(sheader,"","tns","");

And after that you can make whatever call you want:

      client.query({queryString:"Select Id,CaseNumber From Case"},function(err,result,raw){
          if(err){
            //console.log(err);
            console.log(err);
          }
          if(!err && result){
            console.log(result);
          }
      });

The result var will have all the data you expect from the SOAP response:

{ result: 
   { done: true,
     queryLocator: {},
     records: 
      [ [Object],
        [Object],
        [Object],
        [Object],
        [Object],
        [Object],
        [Object],
        [Object],
        [Object] ],
     size: 26 } }

The fact that you cannot call client.describe() forces us to read the WSDL to know which parameters to send with each call.
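
As a final example, when a query result comes back with done set to false, the queryLocator it carries can be fed to queryMore to fetch the next batch. This is a sketch, assuming the same client with the endpoint and SessionHeader already set as shown above:

//keeps fetching batches until the server reports done = true
function fetchAll(result, records, callback){
    records = records.concat(result.result.records || []);
    if(result.result.done){
        return callback(null, records);
    }
    client.queryMore({queryLocator: result.result.queryLocator}, function(err, nextResult){
        if(err) return callback(err);
        fetchAll(nextResult, records, callback);
    });
}

//usage: start from the first query result
fetchAll(result, [], function(err, allRecords){
    if(err) return console.log(err);
    console.log('Total records: ' + allRecords.length);
});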
