When Salesforce is life!


[Salesforce / VCS] The team factor (or How a business analyst can affect the overall delivery speed)

In the previous post, we outlined a simple process, which did not solely focus on development but instead considered the path a feature takes from definition to production deployment. A, well, release process.

And although to us Salesforce developers it seems like tons of unnecessary overhead, there is a reason why multiple parties are involved: it’s a system of checks and balances to make sure that features are stable and meet end-user expectations.

Imagine you shop online for a TV and a simple 23” monitor gets delivered. It’s kind of similar, but not what you wanted. And although you can watch a movie on it, it will not fit the use case you had in mind when ordering a kick-ass UHD supersmart ludicrously large television.

A process introduces the necessary structure for defining, developing, testing and delivering a feature so you can watch the world cup with your friends (no jokes about Italy please…).

We also introduced Copado as our release management platform in the last post, and we did not ditch it. It allows you to unify and drive the collaboration between teams, and there are specific areas where I think Copado helps teams do their job.

But I want to take a further step back, because in order to know how a tool can make a process more efficient, we need to understand how each team in a project impacts overall delivery time. So instead of talking about technology, I want to focus on people in this post, describe the roles typically involved in a Salesforce implementation and their tasks, and what each team member can do to improve the flow.

Excuse me, but: what exactly are you doing?

Let’s recap the process focusing on the tasks per team member:

Define a feature:

Working in an agile way, a feature is usually defined by client business stakeholders, ideally by the product owner. As there are a lot of features to be defined and also tested, the project setup includes one or more business analysts to support the product owner in story definition, documentation and follow-up. Toughened in uncountable meetings and armed with knowledge about the business process, they tend to become advocates for the business side and bounce ideas off developers to get a feature done in a way the business will like. We could spend a whole post on that topic, but for now: time for action.

Develop a feature:

Well, do magic. But stay on time and in scope. Oh, and please, make it as scalable and maintainable as possible. You know, the usual.

Peer or Lead Dev review and rework:

Regardless of whether you just do a face-to-face explanation or use a more structured approach, such as a pull request: having your development reviewed has too many benefits to skip it. It prevents you from introducing a bad design that cripples the project in the long run for short-term success. Insights are shared within the team so there are no surprises. And you become a better developer through feedback.

If you work with Git (which is surprisingly easy with Copado), a pull request can prevent you from introducing bad items in your feature branch, which in the end will result in easier deployments.

Deploy to QA:

Well, this can be easy or an endless pain. If you work with Git, follow the golden rule and you should avoid most of the issues:
– Review your commits and never include other people’s changes as part of your branch. (See why the pull request comes in handy? A small example follows below.)
– Fewer references → fewer potential deployment errors or conflicts.
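
As a minimal illustration (assuming you work against the repository directly and your integration branch is called master), you can check what your feature branch actually contains before opening the pull request:

# List only the commits that exist on your feature branch
git log --oneline master..my-feature-branch

# Show which files those commits touch – anything you don't
# recognize as part of your feature is a red flag
git diff --stat master...my-feature-branch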

Test a feature:

There are elements in this world which are natural opponents, bound in eternal struggle. Water and fire, for instance, or cats and dogs. And of course:


(copyright: https://www.monkeyuser.com/2018/the-struggle/)

Yet, without testers, our features would be less robust and might not match the business requirements. Or even worse, we could break existing functionality. And believe me, you’d rather get a tester’s feedback than explain to your project manager and business sponsor why you broke the internet. On larger implementations, you are more likely to have a QA team focusing on writing and reviewing tests for current developments and automating regression testing.

Finally, the approval of a feature, at least in agile, can only be performed by product owners or delegated business users.

They ordered a TV, so only they are entitled to approve that they received one.

Teamwork in software development is a real thing

Great, now we know the main roles involved in a project. But how does that help us improve our releases? I mean, those business analysts, they don’t do development or any Git stuff; just talking, meetings and PowerPoints. How can they possibly impact delivery speed?

A lot.

Analysts: Define and conquer

User stories are an easy way to capture requirements. Right? Well, yes and no.

Yes.
Because the basic format is easy. For instance: as a user, I want to see how much an account is worth, so that I always know what to focus on during sales and service.

No.
Because it misses tons of details. How do you calculate an account’s worth? Where should you see it? In which moment? Do we need information from other systems? If so, do we need it in real time? This simple story could result in fetching the latest orders from the ERP system on account page load.

Here is where the business analyst would spend time with the client, guide them towards a reasonable solution and future iterations and document it as part of the acceptance criteria.

Regardless of your project framework and methodology, good acceptance criteria meet at least three conditions:

  • Provide process context.
  • Provide enough information for the developers to know what to do, so they can outline a design and estimate the effort.
  • Be precise enough to be testable. Who needs to log in? Where should you navigate? What action do you need to perform to get a result?

Ultimately, the time invested in documenting decent acceptance criteria results in multiple benefits for other teams. For instance, there is less guesswork for developers during estimation, and test scripts can be written in parallel with development.

As we use Copado as our release management solution, the User Story created by analysts will be used by developers to scope and deploy the changes. How neat is that? Definition, estimation and deployment all aligned.

Developers: A short cut for you but a long run for the team

After a feature is defined (estimated, prioritized and included in the sprint backlog) it is time to develop it. Done.
But is there anything a developer can do to increase process efficiency?

Just as the business analyst can impact overall speed by investing time in preparation, a developer can impact delivery to production by investing time in working according to best practices and reviewing his or her work.

  • Documentation, for example, is something most devs dislike; however, it is crucial for others to understand the overall design and to make informed decisions when moving the feature towards higher environments.
  • Write org-independent Apex tests which don’t rely on existing data (see the sketch after this list). Just remember: every time you write @isTest(SeeAllData=true) a kitten dies, puppies cry and unicorns get sad.
  • Do not hardcode references to a specific org. Do that and the unicorn will fade away.
  • Check your feature branch if you work with version control (Git). Always make sure that whatever is committed in your feature branch includes your feature only. A peer or pull request review process can help to catch those wrongly committed items (e.g. a reference to a field you don’t expect in a layout or a report type).
  • Make reviewing easy. How? Well, what about pasting the URL for the feature definition and documentation in the pull request description? Takes less than a minute, but saves the reviewer minutes of searching.
  • Perform a validation deployment against the target org, including running the test classes.
  • Write down pre- or post-deployment tasks in an understandable way, because less tech-savvy team members might need to perform them.
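
To illustrate the point about org-independent tests, here is a minimal sketch (AccountWorthService is a hypothetical class, borrowed from the account-worth story above): the test creates its own data instead of relying on SeeAllData=true.

@isTest
private class AccountWorthServiceTest {

    @testSetup
    static void createData() {
        // The test builds its own records, so it runs in any org
        insert new Account(Name = 'Test Account', AnnualRevenue = 1000000);
    }

    @isTest
    static void calculatesWorthFromTestData() {
        Account acc = [SELECT Id FROM Account WHERE Name = 'Test Account' LIMIT 1];
        Test.startTest();
        // AccountWorthService is a hypothetical class used for this example
        Decimal worth = AccountWorthService.calculateWorth(acc.Id);
        Test.stopTest();
        System.assertNotEquals(null, worth, 'The account worth should be calculated');
    }
}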

So that’s it? Well, not entirely.

With the right solutions in place, such as Copado, you can even go further, and there is a good reason to do so: there’s more than one deployment.

In our setup, there are two deployments required to release a feature. One to QA and another to Prod.

But what if your pipeline has further orgs? Staging, for example, a hotfix org and multiple dev sandboxes? In such a scenario, each manual step needs to be executed multiple times for different orgs.

Luckily Copado has some fancy logic under the hood, which can increase the level of automation considerably, so that you don’t even need to worry about hard coded references (yes, indeed).

In the next post we will take a closer look at how this can be achieved, but just as a teaser, the magic words are: Deployment Tasks and Environment Variables. And of course: “Please”.

Test team: Heavy lifting – easy testing

In an ideal agile world business testers would just check the functionality against the acceptance criteria and say yes or no, but reality paints a different picture. Client stakeholders are caught between user story definition workshops, maybe their ongoing non-project work, internal stakeholder management and testing. Therefore a dedicated QA team can be of great value to drive the testing effort, where better preparation immediately results in shorter test time.

As soon as a feature is in a QA environment the clock starts ticking, so the core objective of the QA team is to make sure that everything is ready for client business stakeholders to test and approve the functionality (or reject it (-_-) ). Here are a few things the QA team can do to prevent a story from being stuck in testing:

  • Ensure you have a test script indicating the steps to follow to achieve a certain result. If acceptance criteria are well documented, they will come in handy for working on the script in parallel to story development.
  • Get the test script steps approved by a peer, who ideally dry-runs them on the dev org if the feature is available.
  • Make sure testers have access to the org with the appropriate permissions to execute the test. Just in case, because who would miss something that obvious, right?
  • Dry-run tests in the QA org to avoid any surprises. Not required, but highly recommended.
  • Identify a test owner who will perform the test, with the help of business analysts and/or product owners.
  • Chase and support business for and during testing. Make sure they are notified, and co-ride tests to support their efforts.
  • Chase and support developers if issues are found and a fix is required.

If you think “That sounds like a lot of documentation, tracking, monitoring, and coordination”, you are completely right, and tooling can help here – which in our case is an easy thing. We defined the user story in Copado with all the required agile information, then developers used it to commit and deploy their changes to QA. The test team can use the same User Story record to define their scripts and track execution. Nice.

Release Managers: Connect the dots to align

Analysts document, developers work according to best practices, testers prepare. So, what can release managers do to move features faster to prod? Apart from the obvious tasks this role implies (e.g. helping teams to fix errors and resolve conflicts), release managers are owners of the overall release process and the technology enabling it, and as a result they need to run continuous improvement efforts. Sounds like lean management? Absolutely. Reducing errors, increasing automation and avoiding waste (including time) are key lean principles – and still valid, although we produce software instead of cars (or TVs).

Ok, let’s say I’m a release manager. What do I need to do?

  • Increase tooling knowledge. Regardless of which tool you use, you can only apply what you know, and being aware of what a tool can do is your foundation.
  • Monitor and analyze the process. What are the steps done by the team? How long do they usually take? Is there anything you might have missed?
  • Analyze challenges. Some deployments are easy, while others may take ages. List them down and investigate the root cause.

Once you have a good overview you can start applying your knowledge to tackle the issues:

  • Identify and implement automation. Even smaller improvements add up. For instance, we could fire a message to the assigned tester as soon as a story is in QA and the script is approved. With Copado this can be done easily with a process builder (or with a trigger, as sketched after this list).
  • Document solutions. Usually deploying Salesforce metadata is easy. Select, deploy, hope, done. Yet some metadata types, such as profiles, standard picklist values or processes, have their own quirks. Good documentation of best practices and how to handle specific situations will enable teams to avoid pitfalls.
  • Listen to the team. Yes, people complain – often with a valid reason. So although nobody likes to get complaints, resolving them will lead to improvements.
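
If you prefer code over Process Builder, a minimal trigger sketch for that tester notification could look like this. The status field, its “Ready for Testing” value and the Chatter post are assumptions for illustration – check your Copado installation for the real API names:

trigger NotifyTesterOnReadyForQA on copado__User_Story__c (after update) {
    List<FeedItem> posts = new List<FeedItem>();
    for (copado__User_Story__c story : Trigger.new) {
        copado__User_Story__c oldStory = Trigger.oldMap.get(story.Id);
        // Status__c and its value are assumed custom elements for this sketch
        if (story.Status__c == 'Ready for Testing' && oldStory.Status__c != 'Ready for Testing') {
            // Requires Chatter feed tracking enabled on the User Story object
            posts.add(new FeedItem(
                ParentId = story.Id,
                Body = 'The story is in QA and the test script is approved – ready for testing!'
            ));
        }
    }
    if (!posts.isEmpty()) {
        insert posts;
    }
}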

Because in Copado your overall flow is aligned to User Stories and you have the power of the force.com platform at your fingertips, you can tinker with your Copado installation to get the information you need. With process builder, you can set up time stamps on the User Story object to analyze the time spent per step and identify bottlenecks. You can even build dashboards and share them with your team.

And if you don’t have a dedicated place where you store process documentation, just create a new object to document solutions, and another one to hand in suggestions (which you later can take as input for User Stories).

Technology does not implement technology. People implement technology.

With all the innovation, features and whatnot being released each week, we sometimes forget that at the end of the day you work with people. You might like one more than another; however, everyone who participates in an implementation is bound to a common goal. There is real value in working together, communicating and helping each other, even at the cost of being nice. Even to testers.

And once you’ve got your team mojo going, the next logical question is: how to make it faster?

Well, this is the moment when tooling is back on the main stage. Copado has some nice features on how to automate steps in your process and in the next post we will take a detailed look at how to set it up so you spend less time with clicking around and more time with the team.

And watching the FIFA world cup on a kick-ass UHD supersmart ludicrously large television.

[Salesforce / IoT] Let’s play the game with Salesforce IoT (part 3): Heroku IoT platform

 

In the previous post we set up everything needed on the Salesforce side to configure the IoT Explorer.

Before jumping on the Arduino Nutellator 3000 project, we have to create a proxy that will transform the data received from the devices in order to push it to our beloved CRM.

For those, like me, who are typical TL;DR developers, head over to the GitHub repository and have a look at this simple NodeJS project.

What do we need?

  • A Salesforce Connected App
  • A Heroku dyno to host the proxy (our own IoT Platform)
  • A Heroku Postgres database (free tier) to handle basic authentication

What will this app do?

The main job of this app is to create Platform Events filled with the data that comes from the Nutellators and send these events to the IoT Explorer to finally trigger the orchestrations.

N.B. If the last sentence is totally obscure to you, please go back to the first post of this series to learn more about Platform Events and how they are related to Salesforce IoT.

Configure a connected app on the CRM

To send a Platform Event using the REST APIs, you’ll need a Connected App.

To create a new one go to Setup > Apps > Connected Apps and create a new app:

We won’t be needing the callback URL (so put any protocol://fake formatted URL) since we’ll be using the Username-Password OAuth flow (more details here).

From this app you need the following info to properly configure the Heroku app:

Next, you need your user’s username/password/token to complete the OAuth process.
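
For reference, the two REST calls the proxy will end up making look roughly like this (domain and API version are just examples). First, the Username-Password OAuth token request:

POST https://login.salesforce.com/services/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=password&client_id=CONSUMER_KEY&client_secret=CONSUMER_SECRET&username=USERNAME&password=PASSWORD_PLUS_TOKEN

Then, with the returned access_token, a Platform Event can be published through the standard sObject REST endpoint:

POST https://INSTANCE.salesforce.com/services/data/v42.0/sobjects/Nutellevent__e/
Authorization: Bearer ACCESS_TOKEN
Content-Type: application/json

{
    "Nutellator_ID__c": "T1-C0DE-IRI",
    "Nutellevel__c": 30
}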

Setup Heroku

Create a new Heroku app (let’s say iot-nutellator.herokuapp.com).
Fork the salesforce-iot-nutellator-proxy repository on GitHub.

Click on the Deploy tab and link the forked GitHub repository:

Do not deploy the code right now.

Add the Heroku Postgres add-on and set everything up

Jump to the Resources tab and look for the Heroku Postgres add-on (choose the free tier):

Let’s put some settings

Before deploying the code from GitHub we have to set up a few settings on the Settings tab:

You should see the DATABASE_URL already there: this is the URL to access the Postgres database.

Deploy

Go back to the Deploy page and hit the Deploy branch button:

The last action is executing the DB initialization code that creates an iot_user table with a single user called “user” with password “pass”: this user (and any other you decide to add) will be used to authorize every request from the device using basic authentication.

To execute the script simply use the Heroku console:
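
If you prefer the terminal over the web console, something like this would do (the script name is hypothetical – check the repository for the actual file name):

heroku run node init-db.js --app iot-nutellator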

Test it out!

Open the app on your browser using https://your-app-name.herokuapp.com.
If everything is ok you should see this message:

Salesforce IoT Proxy 
© Enrico Murru - blog.enree.co 2018

Anatomy of the proxy

The Express JS server exposes 2 different routes:

  • GET /: doesn’t actually do anything
  • POST /api/level: this is the route used by the devices to send their data

A typical call would be formatted like this:

POST /api/level

Headers
Authorization: Basic BASE64(user:pass)
Content-Type: application/json

Body
{
    "level": 30,
    "device_id": "T1-C0DE-IRI"
}

Where device_id is the Device ID that we’ve seen in the Nutellator Salesforce object and level is the nutel-level of the device in %.
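
For a quick test from the command line, the equivalent curl call would be (app name, credentials and device ID are the examples used in this post):

curl -X POST https://iot-nutellator.herokuapp.com/api/level \
  -u user:pass \
  -H "Content-Type: application/json" \
  -d '{"level": 30, "device_id": "T1-C0DE-IRI"}'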

The result of such a call is an update triggered by the orchestration on the targeted device:

In the next and last post we’ll close the series by having fun with Arduino and the Nutellator 3000 project.

[Salesforce / VCS] Develop VS Deliver Features in Salesforce

A dev’s life could be so easy…

Developing a feature in Salesforce is easy, right?

  • Log into your org
  • Use (mostly) some point & click methods to enhance logic, user interface or the data model
  • Done

Sounds like it is developed.

But it is not delivered.

Although the feature is technically done, it is not available for end users in the production environment. Also, nobody tested if it fits the business requirements or if it breaks existing functionality.
In addition, maybe someone should take a look at your feature to check whether it is aligned with the overall solution design.
Oh, and in order to minimize business impact, deployments to production may be restricted to specific time windows per week.

Sounds like we as a team should follow a process to release features in a controlled way.

This is how we roll

There are a variety of processes for release management out there as each team is individual, but usually they structure a series of quality gates in a flow.
Taking the example from above, the high-level process would probably look like this:

So far so good, but now we have a challenge.

Production deployments can only be done at certain moments, so what happens if one feature is tested and ready to go and another one is still being reworked, and both share components such as an Account Layout?
Oh, and we want to have a backup of our metadata (not only classes) to be able to roll back in case we have an issue after deployment.

It would be great if we could work in a way that tracks changes over time and allows us to release specific versions of our metadata.

Git for the save.

As described above, developing on the Force.com platform can be very straightforward. But apart from Flows and Process Builder, old versions are lost once you save your changes to e.g. Classes, Formulas, Validation Rules or Layouts.
To avoid that, you can store local copies of your metadata by retrieving it (e.g. through Workbench or the ANT Migration Tool). You can also deploy the retrieved items. So we could use that to handle our prod deployment, but it sounds like a lot of effort to manage those local files and versions.
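
For instance, a retrieve with the ANT Migration Tool is driven by a package.xml manifest; a minimal one for a couple of classes and a layout could look like this (member names are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>MyFeatureClass</members>
        <members>MyFeatureClassTest</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>Account-Account Layout</members>
        <name>Layout</name>
    </types>
    <version>42.0</version>
</Package>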

Here is where a Version Control System (VCS) comes in handy. And judging by how frequently VCS is mentioned, it has become an important pillar of working with Salesforce. There are several solutions out there (SVN, Mercurial), but as of now Git can be considered the industry standard.

So instead of storing retrieved items on our hard drive using different names and folders for tracking versions, we can simply store them in our Git repository, which will track changes. This will allow us to go back to an earlier moment for rolling back changes or deploy a specific version from the past.

That escalated quickly. Can it be easy again?

Let’s take a step back.

What started out as an easy way to build valuable business features suddenly sounds somewhat complex. Being able to roll back, having quality gates in place – all those are valid points, but now as a developer, apart from creating functionality and reworking it based on peer and QA feedback, you also need to do something with ANT or Workbench, then store it in Git and then deploy it?
Is there an easy way to do this?

Yes, Copado.

To get started, you need to download it from the AppExchange or the Copa.do website. Also, as the goal is to work with version control, get a free Git repo from GitHub or Atlassian/Bitbucket.
Next you need to connect Copado to your Salesforce environments (Dev, QA and Prod in this case) and set it up with the Git repository. There is a quick-start guide you can follow, with links to additional documentation. While you set up Copado, you will notice that it is natively built on the force.com platform. So your knowledge about Salesforce is all you need to modify it (this will be important later, so keep it in mind).

Once the setup is done, the process described above, using Git version control as the source of truth, would look as follows with Copado:

Define feature in Copado

Assuming most Salesforce implementations are done in some form of Agile, the definition can be done in Copado directly, including all required information, such as Sprints, Epics, Acceptance Criteria or Story Points (click here for more details).

Scrum Masters and Analysts can use the Work Manager and Kanban Board to manage stories, roadmaps and sprints.

Develop feature in your environment

Let’s get to the part we like: get creative on how to solve the business issue in Salesforce. This one is easy indeed.

Perform a peer review in your environment

This is done between developers, however, we would like to document the results with a flag to mark the story as “Review Passed”.

Here is when the catchphrase “native force.com” turns into a benefit.
Just create a Checkbox on the Copado User Story object called “Peer Review Passed”, make it available for the required User Profiles and put it on the User story layout. Done*.

*: Wait, you work in Production directly? You can use Copado to deploy this modification.

Deploy to QA

So far so good, let’s go ahead and deploy. Scared?

Just click on “Commit Changes” on the Copado User Story, select your items (use column search and sorting to make your life easy), provide a message and finish your commit.

Back on the User Story page, check “Promote & Deploy” and the following will be done by Copado**:

  • Create a feature branch
  • Retrieve the items you selected
  • Commit the items you selected on the feature branch
  • Create an intermediate Promotion Branch and merge your feature branch onto it (more info on the branching strategy can be found here)
  • Perform the deployment using Git as source
  • You can review your selections on the story, and click on the “View in Git” links to quickly navigate to your repository.

    ** bonus points if you click on “Validate” to make sure you can deploy

Test user story

Once the story is deployed to the next environment, it will be visible on the User Story page and we can change the status to “Ready for testing” and notify the Test Team through Chatter.

If you are thinking “Wait, this is just a record update in Salesforce and it could be automated easily”, you are completely right! Wait for the upcoming blog posts.

As soon as the test team approves the story, they can set the status to “Complete”.

Deploy to Production

Testing is done, and we can move to Prod. But wasn’t there something about other stories modifying the same component and them not being ready?
Well, this is the beauty of version control. Copado will pick the feature branch contents for deployment, and those did not change. Your story is independent and you can work in a true Agile way.

Check “Promote & Deploy” again.

Done.

That’s it. That’s all?

Well, not exactly. The tool offers tons of functionality which can make your life easy, such as the way profiles are trimmed and deployed with Git, an engine to remove (or replace) unwanted tags from XML files, modules for recording and automating tests, and the easiest way to handle Salesforce DX you have ever seen. You can even launch internal Copado logic through Process Builder!

Check out their demos or browse the documentation a little to get an overview of what is possible.

We, however, will leave technical feature descriptions aside and focus on improving our process, as there are elements which need to be tackled to get your team closer to smooth releases.

  • You’ll never work alone, so how do you improve releases by working as a team?
  • Deploying with a simple click is maybe too easy. Can we implement quality gates?
  • Those are too many clicks. Can we automate this?

Look out for the next post, where we will take a closer look at the involved team members and how Business Analysts can play a key role in reducing the time required to release a feature.

[Salesforce / IoT] Let’s play the game with Salesforce IoT (part 2): Setup an Orchestration

Here we go with the second part of the Salesforce IoT playground post.

Read the part 1 post before going on to get the whole context.

Let’s start playing with the Salesforce IoT Explorer Edition by enabling it in Setup:

The Salesforce IoT platform gives you the means to build business processes with point-and-click: you are still able to create triggers and complex logic, but now you have a simple tool that can be used to orchestrate your IoT information flow (aka Nutellator refills).

As you can see from the image above, new setup items appear:

  • Contexts: the configuration that matches platform events (Nutellevent__e) to Salesforce objects (Nutellator__c)
  • Orchestrations: this is the logic you build up to create powerful state machines that represent your Nutellators’ live state
  • Usage Data: lists all the orchestrations running on your Org

Let’s put it all together

The Salesforce IoT Explorer flow is quite simple and works this way:

  1. IoT data comes from outside in the form of HTTP POST requests using REST APIs and Platform Events: the data can be handled and transformed using Apex Triggers (see the sketch after this list)
  2. Each event is then coupled with an actual Salesforce SObject, the contextual data
  3. The context is then processed with Orchestrations, which define state machines and translate state changes into business actions
  4. We now have Salesforce actions that can produce whatever effect you want (external integrations, Sales / Service actions, …)
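
As a minimal sketch of point 1, a subscriber trigger on our platform event could look like this. It updates the device record directly – something our orchestration rules will also do later – so take it purely as an illustration of the trigger option (field names are the ones defined in part 1 of this series):

trigger NutelleventSubscriber on Nutellevent__e (after insert) {
    // Platform event triggers run as subscribers after the event is published
    Map<String, Decimal> levelByDevice = new Map<String, Decimal>();
    for (Nutellevent__e evt : Trigger.new) {
        // Clamp out-of-range readings coming from a flaky device
        Decimal level = evt.Nutellevel__c;
        if (level < 0) level = 0;
        if (level > 100) level = 100;
        levelByDevice.put(evt.Nutellator_ID__c, level);
    }
    // Update the matching Nutellator__c records
    List<Nutellator__c> devices = [
        SELECT Id, Device_ID__c FROM Nutellator__c
        WHERE Device_ID__c IN :levelByDevice.keySet()
    ];
    for (Nutellator__c dev : devices) {
        dev.Nut_level__c = levelByDevice.get(dev.Device_ID__c);
        dev.Last_Measure_Date__c = System.now();
    }
    update devices;
}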

The Context

We have already defined the Nutellevent__e platform event and the corresponding Nutellator__c Sobject.

To couple them we should create a new IoT Context from Setup > Salesforce IoT > Contexts > New Context:

After the Context is created, let’s configure what’s inside. Click on Edit > Add Event data:

Select the main event and then the reference device ID that will be the source used to identify the context objects:

Now click on Add Reference data to complete the configuration:

Select the Nutellator Sobject and the unique field that represents the device:

Complete by hitting Save and then Done.

We are now able to link IoT data to Salesforce objects, such as knowing which device is sending data and which customer (Account) it belongs to.

Our first orchestration

It’s time to add logic to our IoT data to support our refill service.

Click on Setup > Salesforce IoT > Orchestrations > New Orchestration:

Our orchestration is meant to monitor the Nutella levels on the devices, so we expect to have 3 different states:

  • Normal level: the level is way above the minimum nutel-level, nothing to do here!
  • Warning level: we have reached a warning level; we still have enough Nutella, but it’s about to drop under the danger threshold. This state is particularly important if we have several Nutellators for a given customer, as the support technicians could pro-actively refill more than one device when only one is under the danger level
  • Empty level: this is the danger level. From now on the Nutellator could soon run out of its delicious sauce: it’s time to act so as not to leave the customer without happiness.

To create a new state simply click the + Add State button (don’t worry if your colors differ from the following picture, I had to create and delete a few states while shaping the warning and danger ideas):

Switch to the Variables tab and let’s create the warning and empty thresholds:

These variables should be used to evaluate the level received in the IoT data and to make the proper state transition. Let’s go back to the Rules tab and create a new rule for each state:

From Normal we jump to Warning when the level drops under the warning level; from Warning we jump to Empty when the level drops under the empty level; and finally, when in Empty, if the level rises we have a state transition back to Warning (which can finally go back to Normal).
We are using the Warning level as a “proxy” state of transition between the normal and danger states.

Click on the States tab to have a look at the state machine:

Let’s add some more actions.
When a new event comes in, we want to keep track of the level on the Nutellator__c object, as you can recall from the Nutellator tab in our Nutellator 3000 app:

We can add a new action to each state that, for any incoming Nutellevent__e platform event, simply updates the corresponding Nutellator__c object:

Click on Add rule in the rule’s context menu (the three dots on the right side of the rule’s title), select Nutellevent__e in the when column, leave the condition blank and set Salesforce record in the action column:

Select the Nutellator__c object, click Edit and select the following configuration:

Replicate this action on the other rules: you cannot use the Global Rules, because that kind of rule can only be used to reset context variables (such as counters; i.e. you could decide to activate the support process only if the state machine remains in the empty state for a certain number of received events and not immediately).

Finally we need to start our support process if the level drops under the empty level.

Before adding this action, create a Nutellator__c lookup field on the Case object. Then add a rule on the Empty Nutel-level state stating that a new Case record should be created when the state is entered (the when column filled with State entered and a Salesforce record action):

This is the detail of the Empty level state so configured:

Finally activate the Orchestration using the Activate button:

Let’s play the game!

Everything has to start with a platform event, which can be created via:

  • Apex code
  • HTTP post request

For the purpose of this post, we’ll go with the first method, and here is the code to execute (run with the Execute Anonymous plugin of the ORGanizer for Salesforce Chrome & Firefox extension):

Nutellevent__e event1 = new Nutellevent__e(Nutellator_ID__c = 'TI-C04-41R1', Nutellevel__c = 99);
Nutellevent__e event2 = new Nutellevent__e(Nutellator_ID__c = 'T1-C0DE-IRI', Nutellevel__c = 99);
List<Database.SaveResult> results = EventBus.publish(new List<Nutellevent__e>{event1, event2});

We expect 2 different things:

  • Level and date/time of event are written on the Nutellator__c objects
  • The Orchestration’s state machine should have 2 devices on the Normal state (Orchestration’s Traffic tab)

Now let’s take one of the two devices and make its level drop under 30%:

Nutellevent__e event1 = new Nutellevent__e(Nutellator_ID__c = 'T1-C0DE-IRI', Nutellevel__c = 25);
List<Database.SaveResult> results = EventBus.publish(new List<Nutellevent__e>{event1});

Finally let’s drop under the 10% level and a new Case is expected to be automatically created:

Nutellevent__e event1 = new Nutellevent__e(Nutellator_ID__c = 'T1-C0DE-IRI', Nutellevel__c = 9);
List<Database.SaveResult> results = EventBus.publish(new List<Nutellevent__e>{event1});

And here is the Case related to the device:

With the given configuration you may have experienced a strange behavior: when changing state, the level on the Nutellator__c object is not updated.

This is caused by the order of the rules: if a rule makes a state transition before updating the record, the record is simply not updated. The solution is to move the update rules before the state transitions:

What’s next

Now that we have everything set up on the Salesforce platform, we can set up the real IoT platform (using Heroku?) that will send the IoT data to Salesforce (transforming raw data into Platform Events), and I’ll show you how to build an Arduino-powered device that will simulate the Nutellator 3000.

See you in the next post!

[Salesforce / IoT] Let’s play the game with Salesforce IoT (part 1): let’s get started

 
Have you ever heard of IoT before? I strongly believe so: it is one of the most trending topics everywhere.

It’s all about things (i.e. devices) that are connected through the Internet and can become smart by sharing their data.

Devices are only the first part of the IoT world: we cannot think of IoT without the “T” of things and we cannot think of IoT without the “I” of internet.

That said, the actors involved behind the “I” of Internet are not sentient machines ready to use device data to rule humans (not yet, anyway) but platforms that can collect, elaborate and route these tiny pieces of data to make the devices actually smart!

What I want to say is that your toothbrush is not smart until you connect it to the Internet and make it talk with the “Toothbrush Factory Inc.” platform, which will estimate your toothbrush consumption and alert you on your smartwatch or make an automatic order on Amazon.com, so you’ll receive your new toothbrush head the day after, without even knowing you needed a new one!

What about Salesforce?

Salesforce is your toothbrush factory platform for Sales!

I guess the guys at Salesforce won’t like this comparison so much, and they may be right, because there is an actual distinction: Salesforce IoT is not an IoT framework that talks with the devices directly, but an application that works on the data produced by those devices, whose bits of data are transformed by proper IoT platforms (Google, Amazon, Microsoft, …) and consumed by Salesforce (which correlates that info with actual Salesforce objects).

This picture sums up the concept:

The Salesforce IoT product is meant to enhance your business by:

  • Combining device data with Salesforce data to better understand device usage
  • Orchestrating with code-less tools your business process around IoT data (you can still use “low level” Apex code to increase the complexity and customization level of your Org)
  • Increasing user engagement and customer perception (the customer knows he needs something only when the company tells him)

An example? Have a read at the dedicated Trailhead modules to better understand this concept.

An example please!

You own a company that provides rechargeable Nutella stands: they are much like a coffee machine, the only difference being that the machine drops Nutella instead of coffee.

Author’s note: this is something that doesn’t exist, and it makes me really sad 🙁

You sell tens of machines all over your country and provide the “recharge” service as well, for an additional price.

Every Nutella device, which is connected via WiFi to the Internet, sends a data packet to your Google Cloud IoT instance reporting the device ID and the Nutella level (from now on called Nutel-level).

The platform takes those inputs and formats them correctly so that the Salesforce IoT platform can “eat” them (packaging them inside the so-called Platform Events).

Whenever the nutel-level drops under a certain level, Salesforce IoT automatically activates its business process logic magic and a new Case is automatically opened and sent to the technicians along with a Work Order to start a field service operation.

Ready, get set, go!

Before activating the Salesforce IoT Explorer Edition let’s create the necessary metadata on our DE Org.

First we create a Nutellator__c custom object that represents the Nutella device with the following fields:

  • Device_ID__c: Text(255), unique, required, identifies a specific device by its unique id
  • Nut_level__c: Percent(3,2), % level of Nutella on the device (last measure)
  • Last_Measure_Date__c: Date/Time, date of last measure
  • Account__c: Lookup(Account), your customer
  • Location__c: Text(255), where the device is located

We could have used the standard Asset object, but I loved the idea of creating an object called Nutellator.

The second task is to create a Platform Event called Nutellevent__e: if you don’t know what a platform event is, click here to learn more, but you can think of it as an ephemeral object that is not stored in the database but can be used to trigger business logic.
The Nutellevent__e platform event is used to start the IoT flow on Salesforce: this object conveys the ID of the device and the nutel-level percentage.

All is set up to start playing with Salesforce IoT Explorer Edition…but we’ll see it in the next post!

[Salesforce / AppExchange Series] Meet Accessnow, Salesforce Emergency and Privileged Access Management made easy!

This week’s guest post for the AppExchange Series has been written by Francesco Quinterno, founder of accessnow, which built a Salesforce app to help with emergency management… for more details jump down to this great post!

accessnow was founded by Atlanta based Francesco Quinterno and Lesley Morgan.
They’ve leveraged their experience while working for The Coca-Cola Company, Colgate-Palmolive, Warner Brothers, IBM and Coca-Cola Enterprises to build an application that enables Governance, Risk Management and Compliance on the Salesforce platform.
They can be reached at [email protected].


Our purpose as Developers, Admins and Architects is to deliver applications that improve the lives of our customers. When things go smoothly, the user community is appreciative and fills us with praise. However, when things go wrong, it can be a lonely place with no praise. A place where everyone’s focus shifts to asking who caused the issue and how. In these heated moments, it takes cool heads and swift actions to get the business “back into business.”

One of the critical tasks during an emergency is getting the right experts the right access as quickly as possible. It is not uncommon during these high-pressure situations to neglect security and governance protocols and act in a non-compliant way. Sys Admins can provide super user access without any reference to an incident number or change request, which is then further exacerbated when and if the access is not taken away. All of the above are the ingredients for a failed audit in the months to come.

With accessnow, the premier Salesforce Emergency and Privileged Access Management application, you don’t have to compromise speed for compliance.

How?

Meet Maggie Greene, the IT Support User: when an emergency arises, Maggie creates an accessnow request.

She inserts:

  • a reference number (which can be an incident number or change request number from the Case, Servicenow, Remedy or any ticketing system)
  • the reason for needing the elevated access
  • duration of the request
  • start time (immediately or scheduled in the future)

She selects:

  • the profile and/or role
  • or permissions
  • or permissions and role
  • or a single role

Available roles/profiles/permissions are defined based on Maggie’s skillset and job function.
Once she’s selected everything, she saves and submits for approval.

At this point, the request is either automatically or manually approved based on configuration of who the requester is and what is being requested.

Notifications can be configured for all these stages. The requester can be notified of the request’s creation and approval. The approver can be notified when there is a request pending approval.

On approval of the accessnow request, Maggie automatically receives elevated access to begin the troubleshooting process.

While troubleshooting, all changes to data, configuration changes and data views are captured in audit logs that are native to Salesforce.

accessnow also captures logs when users with an accessnow request use the Log In As function. Anyone viewing the audit logs will clearly see the changes to data or configuration were carried out by a person who was logged in as someone else.

In the screenshot below, Maggie Greene created a request and used the Log in As function to log in as Darryl Dixon. While Maggie was logged in as Darryl, she changed data on a case. She then logged out as Darryl and changed data as herself.

Call Center Resource Management Use Case

During heavy call volumes you need help from other resources to answer the phones. Sys Admins shouldn’t be spending their valuable time changing profiles, permission sets, and roles for multiple people.

Call center supervisors create slots of time where help is required. Internal employees claim these slots and for every approved time slot, an accessnow request is generated and approved.
Once the request is approved the internal employee is assigned a Call Center profile for the time defined in the slot.

While working with the new profile, all activities are logged. Once the time elapses the internal employee’s Call Center profile is revoked and their original profiles are reinstated.

Reports

A dashboard showing the top users, requests by status and the permissions requested.

Value

The application’s value is that it eliminates the dependency on Sys Admins for granting and revoking temporary Privileged Access. It allows users to urgently gain temporary access on-demand and automates the approval of the request and revoking of the privileged access. It allows auditors to access logs of activities performed while users had privileged access without having to interrogate Sys Admins. The logging is vital for SOX Internal Access Controls. accessnow allows Architects and Sys Admins to implement the Least Privilege Security Model by reducing the number of permanent Administrators required in the system. It allows organizations to close the gap on GDPR articles 17, 19, 23 and 32.

Contact us at [email protected] for more information.

[Salesforce / Mobile] How to go mobile with Salesforce – Part 2

 
The second part of the guest post about Salesforce mobile adoption by our friend Barbora at Resco.net, who analyzes the issues to avoid when planning Salesforce mobile.

Barbora Piatrova (marketing specialist at Resco) takes her passion for digital marketing & Mobile CRM everywhere she goes. Currently, she’s involved in creating & mastering content strategy at resco.net – one of the leading companies in the world for Mobile CRM. She is now actively also discovering and participating in new thriving communities for Salesforce enthusiasts.


Issues to avoid when mobilizing your Salesforce data

Are you on edge when it comes to mobilizing Salesforce data to your users? Going mobile was never as easy as it is today. But is it secure and safe?

Well, it can be. If you choose the best possible scenario & set the right strategy. The goal is to get your sleek & robust Salesforce organization into phone and tablet – without losing the tools you know and prefer. And at the same time – gaining the productivity perks of native mobility as an extra.

Do you want to get your field reps prepped for a full-fledged mobile experience? Then be aware of the following issues – they must not occur in your new application for Salesforce.

Integration & supported licenses

ISSUE: The app does not tie neatly into your Salesforce ecosystem and/or does not work with all the Salesforce licenses starting from the most basic one (Salesforce Essentials).

Platform availability

ISSUE: The app is not available on all major operating systems – for example, it works only on iOS and Android, excluding Windows.

Security

ISSUE: Mobile data storage is protected, but only by the user’s PIN. This means that if user sessions (automatic logout after a few minutes) and app protection APIs are missing, the application and hence the user data are not truly protected. All of this must be developed from the ground up, which requires complex engineering with a high risk attached to even the smallest oversight.

Offline syncing

ISSUE: The user is not able to determine what functions and formulas will operate when the app is being used offline.

ISSUE: The admin cannot regulate when and what is available for users to sync (manual & background sync, delayed publishes of Salesforce schema changes, etc.).

ISSUE: The app doesn’t allow syncing of data when online for later offline backup, or it syncs very slowly in general.

Offline experience

ISSUE: The app is not reliable in offline mode, runs slowly and does not allow users to go offline for days without losing any of the previous work and data.

ISSUE: Admins and users can’t directly see how the business logic is mapped to the app’s execution.

ISSUE: Multiple users aren’t able to access, edit or change the same data offline.

ISSUE: Users are not able to configure offline datasets within the app or affiliated configuration tool.

Look ‘n feel to an end user

ISSUE: The UI is unpolished and out-of-date – the app has static forms and lacks slick animations, sliding menus, interactive videos, scrolling text, interactive charts, tables, reports, and more.

Reputation & brand recognition

ISSUE: The app does not seem trustworthy, tested and proven. It has a poor reputation, bad ratings and reviews, and customers with bad experiences who do not recommend it.

Enterprise readiness

ISSUE: Your company purchases a mobile solution for Salesforce, but still doesn’t achieve the anticipated ROI. One of the reasons? It was simply not prepared for a change. Either you did not choose an app suitable for your business conditions or you did not set the right strategy, did not integrate it with your existing tools properly or did not involve your employees enough.

User adoption

ISSUE: The company does not involve users in any decision-making processes or provide sufficient training, and the app doesn’t make their job any easier because it is not easy to use, interactive and user-friendly.

User tracking, audit and analytics

ISSUE: Admins are not able to track sales reps’ routes and locations in the field to check compliance with GPS requirements. There is no on-site reporting, offline reports or similar features included in the app.

Flexibility in customizations & branding

ISSUE: It requires coding skills and it is not easy for a regular user to create custom objects, set rules and the functionality of objects. There are limited options to design the app for a company or even customer branding, limited object management for sales reps, and more.

Multimedia capturing

ISSUE: Not all data (photos, multimedia, geo-location) is captured locally within the app integrated with your device. The app should not only enable you to capture data, but also to work with it – especially allowing real-time picture editing during visits, visible on the customer visit report.

Productivity

ISSUE: The productivity features have limitations or aren’t part of the solution at all. Hence, the app doesn’t stress the functionalities that simplify the job of remote workers. For example, users cannot update information on the go, the mobile interface does not offer simple action-based functions, and there is no easy access to notes or contact information. What else shouldn’t be missing? Contact import, calls, maps, email, push notifications, reminders, and similar.

Coverage

ISSUE: Losing connection with your product catalogue in the middle of a visit can be catastrophic and may even affect the relationship you have built with your customer. Therefore, you should be ready to always have the information available without depending on a cellular or internet connection.

Integrations

ISSUE: The more 3rd party tools and service integrations – the better. Companies increasingly demand more specific solutions to run their businesses smoother, easier and faster. If the app works as a single unit that does not efficiently communicate with external tools, it is not a suitable enterprise solution.

Language

ISSUE: Users do not have the opportunity to choose the app’s language according to the region/location, business unit or user role.

Price vs. App functionality

ISSUE: The app comes with one or more of the above-mentioned issues, and on top of that, it’s pricey? Then it may not be worth your time, money, and frustration.
If you want to mobilize Salesforce without running into any of these issues, here is a place to start: www.resco.net/salesforce.

[Salesforce / Mobile] How to go mobile with Salesforce – Part 1

 
A new high-level guest blog about Salesforce mobile adoption by our friend Barbora at Resco.net, who analyzes the main problems and issues related to going mobile.
This is part 1 of the article… stay tuned for the second part!

Barbora Piatrova (marketing specialist at Resco) takes her passion for digital marketing & Mobile CRM everywhere she goes. Currently, she’s involved in creating & mastering content strategy at resco.net – one of the leading companies in the world for Mobile CRM. She is now actively also discovering and participating in new thriving communities for Salesforce enthusiasts.


15 mobile native features to never forget when mobilizing your Salesforce data

Mobile-led initiatives are transforming every business sphere. According to GSMA Intelligence, the number of mobile users exceeded 5 billion in March 2018. We get work done directly from a phone or tablet. Various companies utilizing Salesforce love the ability to run their business while on the road.

Having access to all your Salesforce data via a mobile device delivers a new kind of customer experience. This experience is of utmost importance, especially for companies that have large teams of sales & field service reps. Mobilizing Salesforce simply contributes to each sales and service rep’s productivity.

Mobilizing Salesforce for sales & field service

Mobile devices are built differently than desktops. On phones and tablets, native mobile apps use the frameworks provided by the operating system to lay out the UI. The user interface is different in a native app than on the web, and so is the user experience. And yet, you can still provide your field service & sales team with a unified experience with their Salesforce organization on any device.

How?

With an application that offers an ultimate mobile experience that is no weaker or poorer than the desktop experience and fully integrates all your Salesforce desktop data. All of this with one application, without the need to download tens of 3rd-party AppExchange apps.

Every Salesforce field user needs a complete CRM and customization capability available out-of-the-box on their mobile device/tablet. Higher user adoption will only be achieved if an employee, customer or even partner gets to work with their Salesforce organization on a phone/tablet as they know it. This way, they can do the majority of their business right in the field, plus take advantage of mobile-native features.

Salesforce experience + mobile-native features

What do we mean by saying ‘mobile-native features’?

Here is a list of productivity features to accelerate business processes. What’s their biggest added value? You won’t find them within the Salesforce desktop solution. However, everything you’ve been used to seeing and doing on your PC, you can find on your phone – and more.

Imagine you had a mobile application with all your Salesforce data. And think of all the mobile-first functionalities that could simplify your work. Now, have you ever considered to what lengths you could take your Salesforce data if you had it in the palm of your hand? Just picture, you’d be able to benefit from:

  1. FULL OFFLINE ACCESS
    • access to anything anywhere without connectivity, even for days
    • continuous work with no interruptions, with no data loss
  2. GPS LOCATION and NAVIGATION
    • daily routes at a glance
    • all activities tracked with GPS coordinates
  3. ROUTE PLANNING
    • activities & routes planned on the map, distance calculation
    • user location and tracking
  4. PICTURE CAPTURING & EDITING
    • direct picture taking with built-in camera
    • photo editing (rotate, crop, highlight) on spot
  5. VOICE & VIDEO RECORDING
    • direct video documentation
    • voice recording attached to any Salesforce record
  6. BARCODE & QR CODE SCANNING
    • product information always available & quickly accessible
    • QR codes scanning to avoid manual link typing
  7. BUSINESS CARD SCANNING
    • text transcripts from a business card to a mobile device
    • faster creation of new contacts/leads
  8. PHONE CALLS FROM THE APP & CALL TRACKING
    • received, outgoing calls directly from the app for Salesforce
    • automatic call tracking

*Offline: Note that a rep on the go needs access to the entire CRM database, hence he/she needs true offline. True offline means a strongly encrypted local database on the device, and goes far beyond being able to scroll through the 15 most recently viewed (cached) contacts.

To name a few more, with an advanced mobile solution you can get:

  • CALL & SMS IMPORT, ADVANCED LOGIN (via fingerprint, NFC, QR code)
  • NATIVE APP ON ALL MOBILE PLATFORMS (iOS, Android, Windows)
  • SUPERIOR PERFORMANCE & RESPONSIVENESS
  • CONTACT IMPORT IN-THE-FIELD
  • FAST NOTE TAKING
  • PUSH NOTIFICATIONS

A holistic enterprise mobile strategy helps organizations and their field reps have all the information in one place – to access, update, and interact with client data via mobile devices.

Mobile apps that integrate Salesforce data allow field users to take action while on the go, speed up their operations and business thanks to fast mobile performance, and work with real-time sales/field service data. Working with accurate data helps reps sell smarter, find new opportunities faster, view and manage the sales pipeline, be better prepared for meetings and responsive to customers, show the latest promotional offer, and more.

Curious where to find an all-in-one application that will smoothly integrate with Salesforce and help you enhance your business processes?
Reach out to mobile experts at [email protected].

[Salesforce / MuleSoft] My first Mule ESB flow

Another trailblazer joins the Nerd at Work crew!
His name is Christian Tinghino and his first post is about a brand new addition to the Salesforce platform: MuleSoft.
He’s been helped by another awesome trailblazer, Ivano Guerini.

Christian Tinghino is a Senior Salesforce.com Developer at WebResults, part of Engineering Group.
He started working in 2012, moving his first steps on the Salesforce.com platform in 2014 coding in Apex and Visualforce.
Since 2015 he works in WebResults, fully focused on the development of managed packages and Lightning components.
Like all enthusiast developers, he’s fascinated by innovative, challenging and strategic solutions. He owns two Salesforce.com certifications, writes blog posts on bugcoder.it, and saves the world from time to time.

Ivano Guerini is a Salesforce Senior Developer at Webresults, part of Engineering Group since 2015.
He started his career on Salesforce during his university studies and based his final thesis on it.
He’s passionate about technology and development; in his spare time he enjoys developing applications, mainly on Node.js.


A few days ago came the great news: Salesforce signed an agreement to acquire MuleSoft, a company that provides integration software (link).

SAN FRANCISCO, March 20, 2018 PRNewswire — Salesforce (NYSE: CRM), the global leader in CRM, and MuleSoft (NYSE: MULE), the provider of one of the world’s leading platforms for building application networks, have entered into a definitive agreement under which Salesforce will acquire MuleSoft for an enterprise value of approximately $6.5 billion.

As Salesforce.com developers and nerds we are excited by this news… so my colleague Ivano and I felt we had to take a look at Mule ESB.

Sample use case

For our tests, we want to migrate Salesforce accounts from one organization to another (Sf-to-Sf). Migrated records should dynamically receive the correct Record Type Id once in the destination org, in order to ensure a correct mapping.

The flow should manage both existing and new accounts, inserting or updating records based on their presence in the destination org. For this reason, the support for the UPSERT operation is definitely a good thing.

Setup

Since we just want to evaluate the integration capability with Salesforce, we went with the on-premise Enterprise Edition (EE), which has a Salesforce connector that is not available in the Community Edition (CE). For the record, you can also choose an “Anypoint cloud” version.

Mule EE is delivered as an Eclipse plugin, so you have to install the Java JDK, then download and extract Eclipse. From Eclipse, press Help > Install New Software to add the update sites that contain the runtime:

Some things never change: if you are/were a Java developer, you’ll feel comfortable with this procedure. Just install the EE runtime and Anypoint Studio and you’re ready to create your Mule project via the Eclipse interface.

Once installed, a palette shows the available components, connectors, transformers and so on. To use them, you drag and drop them onto the flow:

Step 1 – Start flow

Mule works with flows: sets of components, transformers and connectors used to fulfil an “integration need”. Components communicate by passing payloads and by reading/writing flow variables accessible to other components in the same flow. You can create custom components and transformers using Java, JavaScript, etc. A session context is also present, which stores variables and information across flow executions.

Always start from the beginning: how does the flow start?
For our test, we want to use the HTTP Listener connector to trigger the flow at:
http://localhost:8081/start-flow

To do this, drag the HTTP component to the beginning of the flow:
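Behind the canvas, Anypoint Studio persists the flow as XML. As a minimal sketch of what this first step could look like in Mule 3 syntax (the configuration and flow names are our own assumptions, and namespace declarations are omitted):

<!-- Listener configuration: host and port of the local HTTP endpoint -->
<http:listener-config name="HTTP_Listener_Configuration" host="localhost" port="8081"/>

<flow name="sf-to-sf-flow">
    <!-- Triggers the flow when http://localhost:8081/start-flow is requested -->
    <http:listener config-ref="HTTP_Listener_Configuration" path="/start-flow"/>
    <!-- next steps go here -->
</flow>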

Step 2 – Retrieve origin accounts

Mule automatically connects “blocks” (components) in a flow sequence, so you just need to put one block after another to build your flow.

Drag the Salesforce connector after the HTTP Listener, so that we can query Accounts from the origin org.

To connect with the org, we need to define a configuration. The cool thing is that once a connection is configured, you can reference it just by its name:

To query accounts from the origin org, set the Salesforce component to execute a query operation; a query builder tool can support you here:

Then, we assign the query result (the component payload) to a flow variable called originAccounts, using the Variable component:
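In XML, this step could roughly look like the sketch below (assuming Mule 3’s Salesforce connector; the connection name, credential placeholders and field list are our own assumptions):

<!-- Connection to the origin org, referenced by name from the query below -->
<sfdc:config name="Salesforce_Origin" username="${sf.origin.username}"
             password="${sf.origin.password}" securityToken="${sf.origin.token}"/>

<!-- Query the accounts to migrate -->
<sfdc:query config-ref="Salesforce_Origin"
            query="dsql:SELECT Id, Name, ExternalCode__c, RecordType.DeveloperName FROM Account"/>

<!-- Store the query result (the payload) in a flow variable -->
<set-variable variableName="originAccounts" value="#[payload]"/>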

Step 3 – Retrieve destination Record Types

Define a different Salesforce configuration to connect to the destination org (as in step 2).

Then drag the Salesforce component again to query Account Record Types, and store the result in a flow variable. The procedure is similar to step 2; a sketch follows below.
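Under the same assumptions as the previous sketch (the WHERE clause simply restricts the result to Account Record Types):

<!-- Query the Account Record Types of the destination org -->
<sfdc:query config-ref="Salesforce_Destination"
            query="dsql:SELECT Id, DeveloperName FROM RecordType WHERE SobjectType = 'Account'"/>

<!-- Store the result for the transformation step -->
<set-variable variableName="destinationRecordTypes" value="#[payload]"/>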

Step 4 – Accounts transformation

Now we have to map the different fields and apply the correct Record Type Id. We can accomplish this with custom code in one of several languages.

Honestly, I had problems with JavaScript because of some data type incompatibilities on Iterators. Anyway, everything worked as expected with Java, so I created a class called CustomTransformer:

The class should extend the Mule AbstractMessageTransformer class and override the transformMessage method, storing the result in a flow variable. For example, our transformer copies the ExternalCode__c field into the ExternalId__c field, resets some fields and applies the new RecordTypeId:

package org.example.transformers; // hypothetical package name, adjust to your project

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

import org.mule.api.MuleMessage;
import org.mule.api.transformer.TransformerException;
import org.mule.streaming.ConsumerIterator;
import org.mule.transformer.AbstractMessageTransformer;

public class CustomTransformer extends AbstractMessageTransformer {

  @Override
  public Object transformMessage(MuleMessage message, String outputEncoding) throws TransformerException {

    // Read the flow variables set in steps 2 and 3
    ConsumerIterator rts = (ConsumerIterator) message.getInvocationProperty("destinationRecordTypes");
    ConsumerIterator accs = (ConsumerIterator) message.getInvocationProperty("originAccounts");

    // Build a map from Record Type DeveloperName to destination Record Type Id
    HashMap<Object, Object> rtMap = new HashMap<Object, Object>();
    while (rts != null && rts.hasNext()) {
      HashMap rt = (HashMap) rts.next(); // cast needed: the raw iterator returns Object
      rtMap.put(rt.get("DeveloperName"), rt.get("Id"));
    }

    // Build the list of accounts to upsert into the destination org
    List newAccs = new ArrayList();
    while (accs != null && accs.hasNext()) {
      HashMap acc = (HashMap) accs.next();
      if (acc.containsKey("RecordType") && acc.get("RecordType") != null) {
        HashMap newAcc = new HashMap(acc);
        String devname = (String) ((HashMap) newAcc.get("RecordType")).get("DeveloperName");
        newAcc.put("Id", null); // let the destination org assign new Ids
        newAcc.put("RecordType", null); // drop the nested RecordType object
        newAcc.put("ExternalId__c", newAcc.remove("ExternalCode__c")); // map the external id field
        newAcc.put("RecordTypeId", rtMap.get(devname)); // apply the destination Record Type Id
        newAccs.add(newAcc);
      }
    }
    message.setInvocationProperty("destinationAccounts", newAccs);
    return newAccs;
  }
}
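To plug the class into the flow at this point, Mule 3 provides a custom-transformer element that references the class by its fully qualified name (the package below is the hypothetical one from our sketch):

<!-- Runs our Java transformer; its return value becomes the new payload -->
<custom-transformer class="org.example.transformers.CustomTransformer"/>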

Step 5 – Upsert accounts

We can now proceed with the upsert operation on the destination org, reusing the previously configured credentials and defining the external id field.

Then, with a combination of Foreach and Logger components, it is possible to parse and inspect the upsert result in the Mule console log. After that, a transformation into a String allows us to print the result to the HTTP listener page. This is not mandatory, but it lets us see how the flow ran.
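Sketched in the same hypothetical XML, this last step could look like:

<!-- Upsert the transformed accounts, matching on the external id field -->
<sfdc:upsert config-ref="Salesforce_Destination" externalIdFieldName="ExternalId__c" type="Account">
    <sfdc:objects ref="#[flowVars.destinationAccounts]"/>
</sfdc:upsert>

<!-- Log each upsert result, then turn the payload into a String for the HTTP response -->
<foreach>
    <logger message="#[payload]" level="INFO"/>
</foreach>
<object-to-string-transformer/>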

Done!

The full flow should look like this:

You can run a local Mule instance by pressing the “run project” button in Eclipse. To execute the flow, just open the HTTP URL defined in step 1 and look at the upsert result directly from your page!

This is an example:

[Salesforce / AppExchange series] BeeFree: responsive Email templates

This week’s new post is dedicated to a new AppExchange app, meant to give us an awesome edge in Salesforce: creating beautiful, responsive email templates with a simple drag-and-drop editor.

Thanks to this week’s guest blogger, Jitender Padda.

Jitender is the founder of CodeJinn and also an avid Salesforce developer who is always looking for innovative ways to make our lives easier on Salesforce.


The Challenge – Designing Elegant Responsive Emails

When it comes to designing perfect, responsive emails, one has to make sure that they run smoothly on all devices. This requires a team of professional designers toiling endlessly to create the perfect email that provides a rich and soothing vision to your customers. After all, email is one of your primary modes of communication with your audience, and it needs to be done just right!

The Native Solution – Salesforce Email Editor

Let’s face the truth: even though we voted more than 1000 times on the idea of an improved email editor, we still haven’t gotten what we would expect from a modern one. And now that the new Salesforce Marketing Cloud offers an enhanced native email editor, it is highly unlikely we will see something like a drag-and-drop editor in Sales Cloud or other Salesforce products. Thus we are forced to use custom HTML or Visualforce, which is quite time consuming and requires some technical knowledge.

Our Solution – BeeFree to the rescue

BeeFree is actually a third-party app offered by MailUp. It provides the easiest, quickest way to design elegant, mobile-responsive emails. All we did was integrate it with Salesforce: now all you need to do is install the app, and in a few simple steps all of your users (no coding required) can get access to the editor, which will allow them to quickly create an awesome-looking email template. Once you’re done, you can use it with workflows, mass emails or however you like! And best of all, as the name implies, it’s free 🙂

Features

  1. It features a drag-and-drop interface that enables anyone to create a beautiful email message.
  2. It creates emails that adapt automatically to small screens, such as that of a smartphone.
  3. Once a message has been created, you can preview, test, and save it as a custom email template.
  4. It allows you to access merge fields inside the editor.
  5. It lets you save Special Links and use them directly by dragging them into the editor.
  6. You can use BeeFree email templates with mass emails.

Here is the link – https://appexchange.salesforce.com/listingDetail?listingId=a0N4V00000DZQiVUAX. Just so you know, we are currently in the middle of the Security Review process, so we should be listed publicly soon. And this one is for the community: thank you for your awesome reviews, it keeps us going 🙂
