MuleSoft has recently been named, for the ninth time, a Leader in the iPaaS (integration platform as a service) market according to Gartner.
At the same time, the first Italian MuleSoft Ambassador has been appointed, and we have the honor of hosting him as he tells us about his journey, which began with a promise made to Enrico a few years ago.
Join us to find out what that promise is.
Useful links – you can find all the links in this folder 😉
This guest post was delivered by Sachin Kumar, who works as a content writer at HKR Training, has solid experience in handling technical content writing, and aspires to learn new things to grow professionally. He is an expert in delivering content on in-demand technologies like ServiceNow, MuleSoft, cyber security, robotic process automation, and more.
This blog is intended to provide you with basic API integration skills with MuleSoft. If you're new to MuleSoft's Anypoint Platform or curious about the latest product developments, read on to learn how to connect cloud, on-premises, and hybrid environments by connecting apps, data, and devices.
Get this MuleSoft Training course, which will help you master MuleSoft skills.
According to Forrester, every $1 spent on MuleSoft today returns $5.45 over the next three years (a projected ROI of 445%).
To understand API integration with MuleSoft, you will be learning the topics below:
Designing and developing APIs and integrations at the speed of light.
Using a single runtime for on-premises and any cloud deployment.
Managing with real-time visibility and fast troubleshooting in one interface.
Ensuring threat protection and automated security at every layer.
1) Designing and developing APIs and integrations at the speed of light
Anypoint Design Center™ provides you with the tools you need to create connectors, implement application and data flows, and make designing, reusing, and testing APIs much easier.
Design API specifications
Create and document APIs using a web-based interface. Quickly write API specifications in OAS or RAML, and test and validate them with a mocking service.
Develop integration flows
Use a visual interface to move, synchronize, or modify data. Transform data with auto-populated assets and a machine-learning automapper.
Build APIs and integrations
Build, test, and debug APIs and integrations using a graphical editor or XML, transform and map complex data, and develop custom connectors.
What does Anypoint Design Center allow you to do?
Design APIs rapidly
Produce API specifications in RAML or OAS by simply defining the appropriate response for each resource in the web-based editor. Reuse security schemes and data models as API fragments, and generate documentation at the same time. With a single click, publish your APIs to Anypoint Exchange for others to explore and reuse.
Connect any system
Use a desktop IDE or web interface to connect systems. Use pre-built connectors, or create your own with the SDK. Thanks to visual error handling, you can find and repair issues while designing.
Map your data in real time
With DataWeave, MuleSoft's expression language, you can query, normalize, and transform data of any type and volume in real time. Speed up data mapping with machine-learning-based suggestions.
Test and deploy applications
Test integrations with MUnit, Mule's unit and integration testing framework. Automate the tests locally or in CI/CD environments. Deploy applications with a single click.
2) Using a single runtime for on-premises and any cloud deployment
The deployment of a Mule application is driven by two key factors:
An instance of the Mule runtime engine.
The deployment of Mule applications to that Mule instance.
When you deploy apps to Anypoint Runtime Fabric or CloudHub, the Mule runtime engine instances required to run the applications are handled by these services.
When you deploy on-premises, you are responsible for installing and configuring the Mule runtime engine instances that execute your Mule applications. Since you have complete control over an on-premises instance (unlike CloudHub or Runtime Fabric deployments), you must be aware of the features unique to on-premises deployments.
Using One Mule Instance To Run Multiple Applications
A single instance of the Mule runtime engine can execute several applications, which lets you use the same namespaces in different applications without collisions or shared state. This has further benefits:
A complicated application can be broken down into several Mule applications, each with its own logic, and deployed on a single Mule instance.
Domains allow you to exchange configurations across several applications.
Applications can rely on different versions of a library.
The same Mule instance can run multiple versions of an application.
3) Managing with real-time visibility and fast troubleshooting in one interface
Understand the health of your application network with full API lifecycle management and enterprise integration governance. Control access with API gateways and unlock data using custom or pre-built policies.
Reduce mean time to resolution with a single view of application performance management, logging, and business operations metrics. Monitor business-critical initiatives with customized dashboards, API functional testing, and alerts.
Govern the full API lifecycle on one platform
Manage APIs from development to retirement with the ease of a unified product.
Automate the creation of API proxies and the deployment of gateways.
Configure pre-built or custom policies, and change them at runtime without downtime.
Customize access with external identity providers and tiered SLAs.
Monitor and troubleshoot deployments efficiently
Get a holistic view of your APIs and integrations to meet efficiency and uptime requirements.
Use real-time alerts to detect issues proactively.
Correlate warning signs to determine the root cause.
Reveal hidden dependencies inside deployments to limit the impact of outages.
Analyze metrics across every deployment
Unveil deeper insights to support your business.
Translate IT metrics into business KPIs with customized dashboards.
Enhance API engagement with detailed consumer metrics.
Capture trends via detailed, visual reports.
4) Ensuring threat protection and automated security at every layer
Anypoint Security™ protects your APIs and integrations with sophisticated defenses. Protect and regulate your application network by securing critical data, stopping threats at the edge, and automatically enforcing security best practices.
Establish smart, secure perimeters
Define Edge gateways with threat-blocking capabilities that harden over time via feedback loops.
Protect sensitive data
Protect sensitive data in transit by automatically detecting and tokenizing it.
Embed security by default
Enforce global policies, use best practices throughout the API lifecycle, and keep an eye on compliance.
What can Anypoint Security do for you?
Edge security
Create layers of defense with enterprise-grade Edge gateways that can be configured quickly. Protect against denial-of-service (DoS), content-based, and OWASP Top 10 threats using policy-driven choke points that can be set up in minutes.
Automatic hardening
Integrate Edge and API gateways to automatically detect API threats, escalate them to the perimeter, and update protections to remove vulnerabilities. Improve security with a learning system that adapts to new threats.
Detection of sensitive information (coming soon)
Receive notifications when API payloads contain sensitive data like PII, PHI, or credit card information. With prebuilt monitoring dashboards, you can streamline governance and auditing.
Automatic tokenization
With a simple, format-preserving tokenization solution that secures sensitive data while enabling downstream dependencies, you can meet compliance requirements faster.
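To illustrate the idea behind format-preserving tokenization (this is a toy sketch of the general technique, not MuleSoft's actual implementation), consider replacing a card number with a token that keeps its length and last four digits, so downstream systems that validate or display the value keep working, while the real value only lives in a vault. The class name and the in-memory "vault" map are ours, purely illustrative:

import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Toy format-preserving tokenizer: the token keeps the length and the last
// four digits of the original number, so format checks downstream still pass.
public class ToyTokenizer {

    // Stand-in for a real token vault; a production system would use a
    // secured, persistent store and collision handling.
    private final Map<String, String> vault = new HashMap<String, String>();
    private final SecureRandom random = new SecureRandom();

    public String tokenize(String cardNumber) {
        StringBuilder token = new StringBuilder();
        for (int i = 0; i < cardNumber.length(); i++) {
            if (i >= cardNumber.length() - 4) {
                // Keep the last four digits visible
                token.append(cardNumber.charAt(i));
            } else {
                // Replace every other digit with a random one
                token.append(random.nextInt(10));
            }
        }
        vault.put(token.toString(), cardNumber);
        return token.toString();
    }

    public String detokenize(String token) {
        return vault.get(token);
    }
}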
Policy Automation
Ensure that policies are applied consistently across all environments, check deployed policies for compliance, and empower API owners to detect out-of-process changes and address violations, bridging the gap between DevOps and security teams.
Access standardization
Establish standard authorization and authentication API patterns and make them available as fragments to encourage reuse rather than building new, potentially insecure code.
Conclusion:
In this blog, we covered how MuleSoft lets you design and develop APIs and integrations quickly, deploy on-premises or to any cloud with a single runtime, manage with real-time visibility and fast troubleshooting in one interface, and ensure threat protection and automated security at every layer.
Another trailblazer joins Nerd at Work crew!
His name is Christian Tinghino and his first post is about a brand new addition to the Salesforce platform: MuleSoft.
He’s been helped by another awesome trailblazer, Ivano Guerini. Christian Tinghino is a Senior Salesforce.com Developer at WebResults, part of Engineering Group.
He started working in 2012 and took his first steps on the Salesforce.com platform in 2014, coding in Apex and Visualforce.
Since 2015 he has worked at WebResults, fully focused on the development of managed packages and Lightning components.
Like all enthusiastic developers, he's fascinated by innovative, challenging, and strategic solutions. He owns two Salesforce.com certifications, writes blog posts on bugcoder.it, and saves the world from time to time.
Ivano Guerini has been a Salesforce Senior Developer at Webresults, part of Engineering Group, since 2015.
He started his career on Salesforce during his university studies and based his final thesis on it.
He's passionate about technology and development; in his spare time he enjoys developing applications, mainly on Node.js.
A few days ago came the great news: Salesforce signed an agreement to acquire MuleSoft, a company that provides integration software (link).
SAN FRANCISCO, March 20, 2018 PRNewswire — Salesforce (NYSE: CRM), the global leader in CRM, and MuleSoft (NYSE: MULE), the provider of one of the world’s leading platforms for building application networks, have entered into a definitive agreement under which Salesforce will acquire MuleSoft for an enterprise value of approximately $6.5 billion.
As Salesforce.com developers and nerds, we are excited by this news… so my colleague Ivano and I felt we had to take a look at Mule ESB.
Sample use case
For our tests, we want to migrate Salesforce accounts from one organization to another (Sf-to-Sf). Migrated records should dynamically receive the correct Record Type ID once in the destination org, in order to guarantee a correct mapping.
The flow should manage both existing and new accounts, inserting or updating records depending on whether they already exist in the destination org. For this reason, support for the UPSERT operation is definitely a good thing.
Setup
Since we just want to evaluate the integration capability with Salesforce, we went with the on-premises Enterprise Edition (EE), which includes a Salesforce connector that is not available in the Community Edition (CE). For the record, you can also choose an "Anypoint cloud" version.
Mule EE is delivered as an Eclipse plugin, so you have to install the Java JDK, then download and extract Eclipse. From Eclipse, choose Help > Install new software and add the sites that contain the runtime:
Some things never change: if you are/were a Java developer, you'll feel comfortable with this procedure. Just install the EE runtime and Anypoint Studio and you're ready to create your Mule project via the Eclipse interface.
Once installed, a palette shows the available components, connectors, transformers, and so on. To use them, drag and drop them onto the flow:
Step 1 – Start flow
Mule works with flows: sets of components, transformers, and connectors used to fulfil an "integration need". Components communicate by passing payloads and by reading/writing flow variables accessible to other components in the same flow. You can create custom components and transformers using Java, JavaScript, etc. A session context is also available, which stores variables and information across flow executions.
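To give an idea of what a custom Java component looks like, here is a minimal sketch (the class and variable names are ours, purely illustrative) that implements Mule 3's Callable interface, reads the current payload and a flow variable, and hands the payload on to the next block:

import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;

// Hypothetical custom component; names are illustrative, not from the post.
public class InspectPayloadComponent implements Callable {

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        // The payload produced by the previous block in the flow
        Object payload = eventContext.getMessage().getPayload();

        // Read a flow variable written earlier in the same flow
        Object note = eventContext.getMessage().getInvocationProperty("someFlowVar");
        System.out.println("Payload: " + payload + " / someFlowVar: " + note);

        // The return value becomes the payload for the next block
        return payload;
    }
}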
Always start from the beginning: how does the flow start?
For our test, we want to use the HTTP Listener connector to trigger the flow at: http://localhost:8081/start-flow
To do this, drag the HTTP component to the beginning of the flow:
Step 2 – Retrieve origin accounts
Mule automatically connects "blocks" (components) in a flow sequence, so you just need to put one block after another to build your flow.
Drag the Salesforce connector after the HTTP Listener so that we can query accounts from the origin org.
To connect to the org, we need to define a configuration. The cool thing is that once a connection is configured, you can reference it just by its name:
To query accounts from the origin org, set the Salesforce component to execute a query operation; a query builder tool can support you here:
Then we assign the query result (the component payload) to a flow variable called originAccounts, using the Variable component:
Step 3 – Retrieve destination Record Types
Define a different Salesforce configuration to connect to the destination org (as in step 2).
Then drag the Salesforce component again to query Account record types, and store the result in a flow variable. The procedure is similar to step 2.
Step 4 – Accounts transformation
Now we have to map the different fields and apply the correct Record Type ID. We can accomplish this using custom code in various languages.
Honestly, I had problems with JavaScript because of some data type incompatibilities with iterators. Anyway, everything worked as expected with Java, so I created a class called CustomTransformer:
The class should extend Mule's AbstractMessageTransformer class and override the transformMessage method, storing the result in a flow variable. For example, our flow puts the ExternalCode__c field into the ExternalId__c field, resets some fields, and applies the new RecordTypeId:
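The original post shows this class only as a screenshot; here is a minimal Mule 3 sketch of what such a transformer could look like, assuming the query results from steps 2 and 3 were stored as lists of maps in the originAccounts and (our assumed name) destinationRecordTypes flow variables, and that all migrated accounts receive a record type looked up by an assumed developer name:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.mule.api.MuleMessage;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractMessageTransformer;

public class CustomTransformer extends AbstractMessageTransformer {

    @Override
    public Object transformMessage(MuleMessage message, String outputEncoding)
            throws TransformerException {

        // Flow variables set in steps 2 and 3; "destinationRecordTypes" is our
        // assumed name for the variable holding the destination record types.
        List<Map<String, Object>> accounts =
                message.getInvocationProperty("originAccounts");
        List<Map<String, Object>> recordTypes =
                message.getInvocationProperty("destinationRecordTypes");

        // Index the destination Record Type Ids by DeveloperName
        Map<String, Object> recordTypeIdsByName = new HashMap<String, Object>();
        for (Map<String, Object> rt : recordTypes) {
            recordTypeIdsByName.put((String) rt.get("DeveloperName"), rt.get("Id"));
        }

        List<Map<String, Object>> transformed = new ArrayList<Map<String, Object>>();
        for (Map<String, Object> account : accounts) {
            Map<String, Object> copy = new HashMap<String, Object>(account);
            // Put the ExternalCode__c field into the external id field for the upsert
            copy.put("ExternalId__c", copy.get("ExternalCode__c"));
            // Reset fields that must not reach the destination org
            copy.remove("Id");
            copy.remove("OwnerId");
            // Apply the new RecordTypeId ("Customer" is an assumed developer name)
            copy.put("RecordTypeId", recordTypeIdsByName.get("Customer"));
            transformed.add(copy);
        }

        // Store the result in a flow variable for the upsert step
        message.setInvocationProperty("transformedAccounts", transformed);
        return transformed;
    }
}

Registered as a Java transformer in the flow, this makes the transformedAccounts variable available to the following components.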
We can now proceed with the upsert operation on the destination org, reusing the previously configured credentials and defining the external ID field.
Then, with a combination of Foreach and Logger components, we can parse and inspect the upsert result in the Mule console log. After that, a transformation to String lets us print the result on the HTTP listener page. This is not mandatory, but it allows us to see how the flow ran.
Done!
The full flow should look like this:
You can run a local Mule instance by pressing the "run project" button in Eclipse. To execute the flow, just open the HTTP URL defined in step 1 and see the upsert result directly on the page!