When Salesforce is life!


API-led integration basics with MuleSoft

This guest post was written by Sachin Kumar, who works as a content writer at HKR Training. He has solid experience in technical content writing and aspires to keep learning and growing professionally. He specializes in content on in-demand technologies such as ServiceNow, MuleSoft, cyber security, robotic process automation, and more.


This blog is intended to give you the basics of API integration with MuleSoft. Whether you're new to MuleSoft's Anypoint Platform or curious about the latest product developments, you'll learn how to connect apps, data, and devices across cloud, on-premises, and hybrid environments.

This MuleSoft Training course can help you master these skills.

According to Forrester, for every $1 spent on MuleSoft today, you will receive $5.45 in return over the next three years.

The following topics will help you understand API integration with MuleSoft:

  • Designing and developing APIs and integrations at the speed of light.
  • Using a single runtime for on-premises and any cloud deployment.
  • Managing, gaining real-time visibility, and troubleshooting quickly with one interface.
  • Ensuring threat protection and automated security at every layer.

1) Designing and developing APIs and integrations at the speed of light

Anypoint Design Center™ provides the tools you need to create connectors, implement application and data flows, and make API design, reuse, and testing much easier.

Specifications for API design

Create and document APIs using a web-based interface. With OAS or RAML, you can quickly create API specs, then test and validate them using a mocking service.

Develop integration flows

Use a visual interface to move, synchronize, or modify data. A machine-learning-powered automapper auto-populates assets for transforming data.

Connecting APIs and integrations

Build, test, and debug APIs and integrations using graphical flows or XML. Transform and map complex data, and develop custom connectors.

What does Anypoint Design Center allow you to do?

Rapid API design

Simply define the appropriate response for a resource in the web-based editor to produce API specs in RAML or OAS. Security schemes and data models can be reused as API fragments, and documentation is produced at the same time. With a single click, you can publish your APIs to Anypoint Exchange for others to discover and reuse.

Connecting any system

Use a desktop IDE or web interface to connect systems. Use pre-built connectors or create your own with the SDK. Find and fix issues while designing, with visual error feedback.

Real-time mapping of your data

With DataWeave, MuleSoft's expression language, you can query, normalize, and transform data of any type or volume in real time. Use machine-learning-based suggestions to speed up data mapping.

Testing and deploying applications

Use MUnit, Mule's unit and integration testing framework, to test integrations. Automate the tests locally or in CI/CD environments. Deploy applications with a single click.

2) Using a single runtime for on-premises and any cloud deployment

Deploying a Mule application involves two key elements:

  • An instance of the Mule runtime engine.
  • The deployment of the Mule application to that Mule instance.

When you deploy apps to Anypoint Runtime Fabric or CloudHub, the Mule runtime engine instances required to run the applications are handled by these services.

You are responsible for installing and configuring the Mule runtime engine instances that execute your Mule applications when you deploy them on-premises. Since you have complete control over the on-premises instance (unlike Runtime Fabric deployments or CloudHub), you must be aware of the features unique to on-premises deployments.

Using one Mule instance to run multiple applications

The Mule runtime engine can execute several applications from a single instance, allowing you to use the same namespaces in different applications without collisions or shared state. This brings further benefits:

  • A complicated application can be broken down into numerous Mule applications, each with its own logic, and then deployed in a single Mule instance.
  • Domains allow you to exchange configurations across several applications.
  • Applications can rely on different versions of a library.
  • The same instance of a Mule can run multiple application versions.

3) Managing, gaining real-time visibility, and troubleshooting quickly with one interface

Understand the health of your application network with full lifecycle API management and enterprise integration governance. Use API gateways to control access and unlock data with custom or pre-built policies.

Reduce mean time to resolution with a single view of application performance management, logging, and business operations metrics. Monitor business-critical initiatives with customized dashboards, API functional testing, and alerts.

Governing the full API lifecycle with one platform

From development to retirement, manage APIs with the ease of a unified product.

  • Automate the creation of API proxies and the deployment of gateways.
  • Configure pre-built or custom policies, and change them at runtime without downtime.
  • Customize access with external identity providers and tiered SLAs.

Monitoring and troubleshooting deployments efficiently

Get a holistic view of your APIs and integrations to meet efficiency and uptime requirements.

  • Use real-time alerts to detect issues proactively.
  • Correlate warning indications to determine the root cause.
  • Uncover hidden dependencies inside deployments to limit the impact of outages.

Analyzing metrics across every deployment

Unveil deeper insights to support your business.

  • Use customized dashboards to translate IT metrics into business KPIs.
  • Use detailed consumer metrics to enhance API engagement.
  • Capture trends via detailed, visual reports.

4) Ensuring threat protection and automated security at every layer

Anypoint Security™ protects your APIs and integrations with sophisticated defenses. Protect and govern your application network by securing critical data, stopping threats at the edge, and automatically enforcing security best practices.

Establishing smart, secure perimeters

Define Edge gateways with threat-blocking capabilities that harden over time via feedback loops.

Protect sensitive data

Protect sensitive data in transit by automatically detecting and tokenizing it. 

Embed security by default

Enforce global policies, use best practices throughout the API lifecycle, and keep an eye on compliance.

What can Anypoint Security do for you?

Edge security

Create layers of defense with enterprise-grade Edge gateways that can be quickly configured. Protect against denial-of-service (DoS), content-based, and OWASP Top 10 threats using policy-driven choke points that can be set up in minutes.

Automatic hardening

Integrate Edge and API gateways to automatically detect API threats, escalate them to a perimeter, and update protections to remove vulnerabilities. Improve security by implementing a learning system that adapts to new threats.

Detection of sensitive information (coming soon)

Receive notifications when API payloads contain sensitive data like PII, PHI, or credit card information. With prebuilt monitoring dashboards, you can streamline governance and auditing.

Automatic tokenization

With a simple, format-preserving tokenization solution that secures sensitive data while enabling downstream dependencies, you can meet compliance requirements faster.

Policy Automation

Ensure that policies are followed consistently across all environments, check for compliance with policies that have been deployed, and empower API owners to detect out-of-process changes and address violations, bridging the gap between DevOps and security teams.

Access Standardisation

Establish standard authorization and authentication API patterns and make them available as fragments to encourage reuse rather than building new, potentially insecure code.

Conclusion:

In this blog, we have seen how MuleSoft lets you design and develop APIs and integrations quickly, deploy on-premises or to any cloud with a single runtime, troubleshoot quickly and gain real-time visibility through one interface, and ensure threat protection and automated security at every layer.

[Salesforce / Amazon Echo] AlexaForce 2.0: integrate Salesforce and Alexa (the ultimate Apex library)

More than two years ago I wrote about a library I built for integrating Salesforce and Amazon Echo using its REST APIs and Apex: this is the original post.

I supported the library for a while, hoping that the Ohana would take ownership of it, but unfortunately this didn't happen.

To my great surprise, I then met the next guest blogger, Harm Korten, who was developing his own version of the AlexaForce library.

I'm more than happy to give space to his amazing library, and I hope the time is now ripe to bring it to a wider audience!

Harm Korten is a Force.com fan from The Netherlands. His professional career in IT started in 2001 as a developer, but his interest in computers started well before that. He got introduced to Salesforce in 2005, working at one of the first Dutch Salesforce.com partners, Vivens. He has been a Salesforce fan and advocate ever since. Over the years, he has worked on countless Salesforce projects at dozens of Salesforce end-user customers. Currently he is active at Appsolutely, a Dutch Salesforce partner founded in 2017. Find him on LinkedIn or follow his Salesforce (and other) adventures on his blog at harmkorten.nl.

Introduction

In the first week of 2018, I ran into some of Enrico Murru's work. Google offered his AlexaForce Git repo (https://github.com/enreeco/alexa-force) as a suggestion to one of my many questions about integrating Amazon Alexa (https://developer.amazon.com/alexa) with Salesforce. It turned out Enrico had been working on the same thing, using the same technology stack, as I was at that moment.

An Alexa Skill SDK in APEX, only 2 years earlier!

Nerd reference
Up until that moment, besides Enrico's proof-of-concept version of such an SDK, the only available technology stacks that allowed integration between Salesforce and Alexa were the Node.js and Java SDKs. These could be hosted on Heroku and use the Salesforce APIs to integrate.

Like Enrico, I wanted to build an on-platform (Force.com) Alexa Skill SDK. This common interest put us in contact. One of the results is this guest blog, not surprisingly, about AlexaForce. Not Enrico’s AlexaForce, but Harm’s AlexaForce. We apparently both came up with this very special name for the SDK (surprise, surprise) 😉

AlexaForce

The basic idea behind this Force.com SDK for Alexa is to remove the need to work with Salesforce data through the Salesforce API. The Java or Node.js approach would have Amazon send requests to Heroku, and from there require API communication with Salesforce.

With the AlexaForce SDK, Amazon sends the Alexa requests straight to Salesforce, giving a developer full access to the Salesforce data using SOQL, SOSL and APEX. The resulting architecture is depicted in the image below.
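
To make the on-platform approach more concrete, here is a minimal, illustrative sketch of an Apex REST endpoint that an Alexa Skill could post its requests to. This is not AlexaForce's actual implementation: the class name, URL mapping and response building below are assumptions for the example, so refer to the GitHub repo for the real classes. The point of the SDK is precisely that you do not have to write this plumbing yourself.

// Illustrative only: a bare-bones Apex REST endpoint an Alexa Skill could call.
// AlexaForce's real classes and routing differ; see the GitHub repo.
@RestResource(urlMapping='/alexaskill/*')
global with sharing class AlexaSkillEndpoint {

    @HttpPost
    global static void handleRequest() {
        // Alexa posts a JSON body describing the request
        // (LaunchRequest, IntentRequest, SessionEndedRequest, ...).
        Map<String, Object> body = (Map<String, Object>)
            JSON.deserializeUntyped(RestContext.request.requestBody.toString());
        Map<String, Object> alexaRequest = (Map<String, Object>) body.get('request');
        String requestType = (String) alexaRequest.get('type'); // e.g. 'IntentRequest'

        // From here the skill logic has full access to SOQL, SOSL and DML,
        // with no Heroku middle tier or Salesforce API calls required.
        Map<String, Object> response = new Map<String, Object>{
            'version' => '1.0',
            'response' => new Map<String, Object>{
                'outputSpeech' => new Map<String, Object>{
                    'type' => 'PlainText',
                    'text' => 'Hello from Salesforce'
                },
                'shouldEndSession' => true
            }
        };
        RestContext.response.addHeader('Content-Type', 'application/json');
        RestContext.response.responseBody = Blob.valueOf(JSON.serialize(response));
    }
}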

For more information about AlexaForce and how to use it, please visit https://github.com/HKOLWD/AlexaForce. You will find code samples and detailed instructions there. For this article, I will elaborate on a specific Alexa Skill design approach, which is still in beta at Amazon: Dialogs.

Dialogs

Generally speaking, the most important part of an Alexa Skill is its Interaction Model. The Interaction Model is defined in the Amazon Developer Portal when creating a new skill. The model determines, among other things, how comprehensive your skill will be as well as how user-friendly it is.

An Alexa Skill model generally consists of Intents and Slots. The Intent holds what the user is trying to achieve, the Slots contain details about the specifics of the user’s intention. For example, the Intent could be ordering a pizza, the Slots could be the name and size of the pizza, the delivery location and desired delivery time.

One could build a model that just defines Intents, Slots, Slot Types and some sample utterances. This type of model would put a lot of the handling of the conversation between Alexa and the user in your (APEX) code. Prompting for information, checking and validating user input etc. would all be up to your code.

Here’s where Dialogs come in handy. With a Dialog (which is still in beta at the time of this writing) you put some of the conversation handling inside the Interaction Model. In other words, besides defining Intents, Slots and Utterances, you also define Alexa’s responses to the user. For example, the phrase Alexa would use to ask for a specific piece of information or how to confirm information given by the user.

From an AlexaForce perspective, you could simply tell Alexa to handle the next response using this Dialog definition inside the Interaction Model. This is done by having AlexaForce send a Dialog.Delegate directive to Alexa.

Example

Imagine an Alexa Skill that takes support requests from the user and creates a Case in Salesforce based on the user’s request, a ServiceRequest (Intent) in this example.

Two important data points (Slots) need to be provided by the user:

  1. The topic of the request, represented by ServiceTopic in this example.
  2. The description of the issue, represented by IssueDescription in this example.

A Dialog allows you to have Alexa collect the data points and have them confirmed autonomously. The APEX keeps delegating conversation handling to Alexa until all required Slots have been filled.

A Dialog has three states: STARTED, IN_PROGRESS and COMPLETED. When the Dialog is COMPLETED, you can be sure that Alexa has fully fulfilled the Intent as defined in your model, including all its required Slots. Below is a code sample that implements this, returning true on Dialog completion.

if(req.dialogState == 'STARTED') {
    // The Dialog has just started: delegate the conversation to Alexa
    // so it can start collecting the required Slots.
    alexaforce.Model.AlexaDirective dir = new alexaforce.Model.AlexaDirective();
    dir.type = 'Dialog.Delegate';
    dirManager.setDirective(dir);
    return false;
} else if(req.dialogState != 'COMPLETED') {
    // The Dialog is IN_PROGRESS: keep delegating until all required Slots
    // have been filled and confirmed.
    alexaforce.Model.AlexaDirective dir = new alexaforce.Model.AlexaDirective();
    dir.type = 'Dialog.Delegate';
    dirManager.setDirective(dir);
    return false;
} else {
    // The Dialog is COMPLETED: all required Slots are available.
    return true;
}

The APEX takes over again when Alexa sends the dialog state ‘COMPLETED’. Once this happens, both the ServiceTopic and IssueDescription will be available (and confirmed by Alexa) to your APEX to create the Case.
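
As an illustration only, the completed Dialog could then be turned into a Case along these lines; the getSlotValues helper below is hypothetical, so use whatever slot accessors the AlexaForce request model actually provides.

// Hypothetical sketch of handling the COMPLETED state.
// getSlotValues() is an illustrative helper, not part of a documented API.
if(req.dialogState == 'COMPLETED') {
    // Assume the confirmed Slot values are available as a simple map.
    Map<String, String> slotValues = getSlotValues(req); // hypothetical helper

    Case supportCase = new Case(
        Subject = slotValues.get('ServiceTopic'),
        Description = slotValues.get('IssueDescription'),
        Origin = 'Alexa' // assumes an 'Alexa' value exists on the Case Origin picklist
    );
    insert supportCase;
}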

This example would be even more powerful if you set up account linking. This would allow users to first log in to Salesforce (e.g. a Community), therefore providing the developer with information about the Salesforce User while creating the Case.

All of the code for this example, including the model and full APEX can be found here: https://github.com/HKOLWD/AlexaForce/tree/master/samples/Dialog.Delegate.

[Salesforce / REST APIs] API Request limit on REST header response

Playing with the Salesforce APIs for a super secret project, I came across this response header (note the sforce-limit-info entry):

{
    date : "Fri, 31 Jul 2015 22:20:47 GMT",
    set-cookie : [
       "BrowserId=8gV_vxxxT--xxxi0bg0vug;Path=/;Domain=.salesforce.com;Expires=Tue, 29-Sep-2015 22:20:47 GMT"
    ],
    expires : "Thu, 01 Jan 1970 00:00:00 GMT",
    sforce-limit-info : "api-usage=113/5000000",
    last-modified : "Fri, 24 Jul 2015 12:07:09 GMT",
    content-type : "application/json;charset=UTF-8",
    transfer-encoding : "chunked"
}

This is basically the number of API requests consumed so far out of your org's total allowance, and external systems can use it to throttle themselves instead of failing when the limit is exceeded.

I'm sure this has been there since the beginning of time, it is probably documented and, as a Salesforce expert, I should know it, but it is worth an article!

The only thing to know is that the value is not precise: it is recalculated asynchronously, so the number may not be the exact current amount.

To test this, try my own REST request utility and call a simple global describe:

     GET https://[your_instance].salesforce.com/services/data/v34.0/sobjects/
     Headers
     Authorization: Bearer [a_valid_session_id]
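
If you prefer calling the API from Apex in another org (or from any HTTP client), a minimal sketch of reading and parsing the header could look like the following; the instance URL and session id are placeholders, and the endpoint must be registered as a Remote Site Setting.

// Call the global describe and inspect the Sforce-Limit-Info header.
Http http = new Http();
HttpRequest req = new HttpRequest();
req.setEndpoint('https://your_instance.salesforce.com/services/data/v34.0/sobjects/');
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + 'a_valid_session_id');

HttpResponse res = http.send(req);

// The header looks like "api-usage=113/5000000": calls used / daily allowance.
String limitInfo = res.getHeader('Sforce-Limit-Info');
if (limitInfo != null) {
    String usage = limitInfo.substringAfter('api-usage=');
    Integer used = Integer.valueOf(usage.substringBefore('/'));
    Integer allowed = Integer.valueOf(usage.substringAfter('/'));
    System.debug('API calls used so far: ' + used + ' of ' + allowed);
}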
