When Salesforce is life!

Who needs so many records?

Today’s post has been written by Nikos Mitrakis, the creator of Forceea, an amazing Data Factory Framework for Salesforce.
Some facts about Nikos:
– Salesforce Developer at Johnson & Johnson EMEA Development Centre (EDC)
– Started his Salesforce journey in 2014
– Has passed 13 certifications, including Application & System Architect
– Holds a Physics degree
– Married since 1994, has a daughter
– Loves watching sci-fi movies and good comedies
– Lives in Limerick, Ireland


A first question you probably have when you read about creating millions of records is "Who really needs to create millions of records?" Sometimes it's not "millions"; it's anything from a few thousand to hundreds of thousands of records. But the need is the same: a flexible tool that can insert (and, of course, delete) many SObject records and will allow:

  • Companies of any size to create sandboxes for User Acceptance Testing (UAT).
  • AppExchange ISV/Consulting partners to create orgs with sample data for demos or for a realistic simulation of their app.
  • Testers or business users to generate their testing data in a sandbox.
  • Architects to create Large Data Volumes (LDV) for stress testing of their designs.

Forceea overview

Forceea data factory (a GitHub project) can create data using the Dadela data generation language. The framework can insert/update records synchronously for test methods (or for inserting a few hundred records) in your org, but it can also insert/delete records asynchronously.

Forceea has a rich set of powerful data generation tools and it’s the most sophisticated data factory for Salesforce. The latest release adds variables, permutations of serial values and the first function-x definition.

I can hear you asking: "How complex (or 'difficult') is it to create records with Forceea asynchronously? Do I need to know how to write code?"

The answer is: "Yes, you should write a few lines of Apex code. But no, it's not difficult at all!" Sometimes the data creation is complex because we must have a deep knowledge of how our SObjects are related to each other, but this doesn't require advanced programming skills.

So, what is needed to start working with it?

  • A Template.
  • An anonymous window to execute Apex scripts.
  • A Lightning component to monitor the progress.

Let's start with…

The Template

In my previous article, How to create an Apex reusable Data Factory Library using Forceea Templates, we constructed some Templates using an older version of Forceea. The good news is that Forceea now inherently supports Templates, so the Template creation process is simpler.

What is a Template

A Template will not create data; it’s a “description” of the structure of the data we want to create.

When we construct a Template we define:

  • The SObjects that will be created.
  • The number of records of each SObject.
  • What fields will be populated.
  • The structure of field values.

A Template is a Map<String, FObject>, so our Template will start with the initialization of this Map:

Map<String, FObject> template = new Map<String, FObject>();

Defining what data we need

Before starting our Template, we should have a good understanding of the SObjects and fields we need, the relationships between the SObjects, and the data we want for each field.

Here are our (hypothetical) requirements:

Accounts

  • Record type: the record type with name MajorAccount.
  • Name: Account-1, Account-2, etc.
  • Industry: any picklist value except Banking and Services.
  • AnnualRevenue: a random integer number between 1M and 10M.
  • Rating: any picklist value.
  • Type: any random value between Prospect, Customer and Analyst.
  • Shipping address: any (real) U.S. address.

Opportunities

  • Record type: the record type with name BigOpp.
  • Name: <Account> – <text>, where <Account> is the name of the related account and <text> is a text of random words between 20 and 40 chars.
  • Amount: a random number between 10K and 1M, rounded to nearest 100.
  • StageName: any picklist value except Closed Won and Closed Lost.
  • Type: New Business.
  • CloseDate: any date between 1 Jan. 2020 and 30 June 2020.
  • AccountId: the 1st account to the 1st opportunity, the 2nd account to the 2nd opportunity and so on. When we run out of accounts, we start again from the 1st account, then the 2nd, etc.

For every 1 account we’re going to create 10 opportunities.

The Template for accounts

First, we “add” the Account definitions in our template:

template.put('Accounts', new FObject(Account.SObjectType)
  .setNumberOfRecords(10)
  .setDefinition(Account.Name, 'static value(Account-)')
  .setDefinition(Account.Name, 'serial type(number) from(1) step(1) scale(0)')
  .setDefinition(Account.Industry, 'random type(picklist) except(Banking,Services)')
  .setDefinition(Account.AnnualRevenue, 'random type(number) from(1000000) to(10000000) scale(0)')
  .setDefinition(Account.Rating, 'random type(picklist)')
  .setDefinition(Account.Type, 'random type(list) value(Prospect,Customer,Analyst)')
  .setDefinition(Account.ShippingStreet, 'random type(street) group(shipping)')
  .setDefinition(Account.ShippingPostalCode, 'random type(postalCode) group(shipping)')
  .setDefinition(Account.ShippingCity, 'random type(city) group(shipping)')
  .setDefinition(Account.ShippingState, 'random type(state) group(shipping)')
  .setDefinition(Account.ShippingCountry, 'random type(country) group(shipping)')
);
  • The order of the field definitions is important! Forceea generates the values for the first field definition, then for the second, etc.
  • The Name field has 2 definitions. The first generates the same (static) value "Account-" and the second generates serial numbers (1, 2, 3, …).
  • We "grouped" all address definitions in order to "link" the correct street to the correct city, postal code, etc.
  • If we had a Billing address, we could copy the value from the Shipping address, e.g. setDefinition(Account.BillingCity, 'copy field(ShippingCity)')

The Template for opportunities

Now we are going to set the Opportunity definitions:

template.put('Opportunities', new FObject(Opportunity.SObjectType)
  .setNumberOfRecords(100)
  .setDefinition(Opportunity.AccountId, 'serial lookup(Account) mode(cyclical) source(forceea)')
  .setDefinition(Opportunity.Name, 'copy field(AccountId) from(Account.Name)')
  .setDefinition(Opportunity.Name, 'static value(" - ")')
  .setDefinition(Opportunity.Name, 'random type(text) minLength(20) maxLength(40)')
  .setDefinition(Opportunity.Amount, 'random type(number) from(10000) to(1000000) scale(2)')
  .setDefinition(Opportunity.StageName, 'random type(picklist) except(Closed Won,Closed Lost)')
  .setDefinition(Opportunity.Type, 'static value(New Business)')
  .setDefinition(Opportunity.CloseDate, 'random type(date) from(2020-01-01) to(2020-06-30)')
);

The FObjectAsync class

Now we can proceed with the actual insertion of records. Our main tool is the FObjectAsync class.

How the async process works

When we insert or delete records asynchronously, Forceea uses Queueable Apex to execute one or more jobs. These jobs have some higher governor limits (e.g. 60,000ms total CPU time and 200 SOQL queries), which is definitely positive for our data generation needs.

If you think "I'm going to create x accounts and y opportunities", forget that way of thinking: Forceea works with iterations! An iteration is the number of records (for each SObject) defined in the Template we use. Our Template creates 10 accounts and 100 opportunities, so 1 iteration will create 10 accounts and 100 opportunities.

Another important detail is Partitioning, which has two parts:

  • Template: you define the Partition field for each SObject with the method setPartitionFieldName.
  • FObjectAsync: you define the Partition field value for all SObjects with the method setPartitionFieldValue.

The Partition field value should be a string which will identify (or “partition”) the inserted records. As a best practice, use a value with a few characters, even a single letter (uppercase or lowercase).
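
For example, the two parts could look like this in our Template and script (a minimal sketch, assuming setPartitionFieldName accepts the field's API name and that Account has a custom text field ForceeaPartition__c):

template.put('Accounts', new FObject(Account.SObjectType)
    .setNumberOfRecords(10)
    .setPartitionFieldName('ForceeaPartition__c') // Template: the field that will store the partition value
    // ... the field definitions we constructed earlier
);

new FObjectAsync(template)
    .setPartitionFieldValue('df') // FObjectAsync: the value that tags every inserted record
    .insertRecords();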

When inserting records, Forceea checks:

  • If there is a Partition field defined in each SObject.
  • If there is a Partition field value.

If both conditions are valid, Forceea will insert the value in the Partition field of each record. So, let's say that the Partition field for Account is ForceeaPartition__c and the Partition field value is df. In this case, Forceea will insert the value:

  • df1 into the records inserted in Job 1.
  • df2 into the records inserted in Job 2.
  • df3 into the records inserted in Job 3.
  • and so on.

Insert records asynchronously

Now we are going to insert 1,000 iterations, so we’ll insert 1,000 x 10 = 10K accounts and 1,000 x 100 = 100K opportunities.

Open an Anonymous Apex window and enter the following lines:

new FObjectAsync(template)
    .setNumberOfIterations(1000)
    .setNumberOfJobs(20)
    .setPartitionFieldValue('df')
    .insertRecords();
  • The default number of (parallel asynchronous) jobs is 30. Here we require 20 jobs.
  • The partition value is “df”.

Execute the code and then go to the Data Factory tab of the Forceea Lightning app.

  • In the Log panel Forceea displays information about the execution of each job.
  • The Messages panel contains an overview of the async process.
  • The Progress panel will let you know how many iterations have been inserted.
  • Finally, the Job Status panel displays a visual indication of the status for each job (black: pending, green: successful, red: failure, orange: terminated).

Forceea will follow this procedure during the async insertion process:

  • Benchmarks the operation by inserting 1 iteration in the first batch. The transaction is rolled back, so it doesn’t permanently insert any records.
  • Executes the second batch of each job, which creates and inserts records of each SObject defined in the Template, with as many iterations as possible (remember the benchmarking).
  • If there are no errors and there are more iterations to be inserted, a third batch is created, and so on.
  • When all iterations assigned to a job have been inserted, the job ends with a successful completion.

When we have a serial definition, Forceea will insert the records without any gaps in the serialization!

Delete records asynchronously

The deletion process follows almost the same logic:

new FObjectAsync(template)
    .setNumberOfJobs(20)
    .setPartitionFieldValue('df')
    .deleteRecords();

Execute the above Apex code and then go to the Data Factory tab to watch the progress.

Forceea will follow these steps during the async deletion process:

  • Reverses the order of SObjects in the Template, so the last SObject will get the first position, etc.
  • If all SObjects in the Template have a Partition field and FObjectAsync has a Partition field value, a number of jobs are enqueued for parallel processing (each job will delete all records of different partitions), otherwise it enqueues only 1 job (no partitioning).
  • The deletion starts from the SObject in the first position, executing the first batch of each job, which benchmarks the transaction to calculate the maximum number of records that can be deleted in every batch. This first benchmarking batch deletes up to 200 records.
  • If there are no errors and there are more records to be deleted, a second batch is created after the completion of the first batch, and so on.
  • When all SObject records assigned to a job have been deleted, the job moves to the second SObject, etc.

Important: if Forceea finds in the Template a definition for the RecordTypeId field of an SObject, it will delete the records of this Record Type only.

Forceea will stop the execution of a job when an error is encountered, except for errors related to record locking, where it will raise an error only after the 5th occurrence of the UNABLE_TO_LOCK_ROW error.

Using existing lookup records

Forceea will take care of all the complex orchestration of the asynchronous process. The parallel processing offers an advantage, but it’s based on the assumption that we won’t query any existing records from the database, otherwise we may have record locking.

For example, if we have a custom SObject Language__c and we have the lookup field Language__c on Opportunity, to get random IDs for this field we would use:

setDefinition(Opportunity.Language__c, 'random lookup(Language__c) source(salesforce)')

If the above definition raises the UNABLE_TO_LOCK_ROW error (unable to obtain exclusive access to this record), then your only option is to use a single job with setNumberOfJobs(1), as in the script below.
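
For example, the earlier insertion script would become:

new FObjectAsync(template)
    .setNumberOfIterations(1000)
    .setNumberOfJobs(1) // a single job avoids parallel transactions competing for the same lookup records
    .setPartitionFieldValue('df')
    .insertRecords();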

Conclusion

Nobody can say that data generation is simple or without issues. Under the hood, the data generation process is quite complex, but it shouldn't be for the user; Forceea will gracefully handle all the complexity.

I strongly believe that an admin, a tester or even a business user, with no Apex knowledge, can insert/delete records asynchronously using FObjectAsync and existing Templates, which a developer or advanced admin could create.

You can find the code of the above scripts in the Forceea-training GitHub repo. And don't forget to read the Forceea Success Guide; it has a lot of examples and details.

Get Started with Salesforce Data Cleansing

Il'ya Dudkin is the content manager and Salesforce enthusiast at datagroomr.com. He has more than 3 years of experience writing about Salesforce adoption, duplicate detection issues and system integrations with MuleSoft. He also works with IT outsourcing companies to facilitate the adoption of new Salesforce apps and increase user acquisition and loyalty.


Simply getting started with cleaning up the data in Salesforce may be a daunting challenge, especially for companies that have hundreds of thousands of records or even millions. It is important to know that even if duplicates are severely hindering your marketing and sales efforts, you can bring all of the issues you are having under control and improve the overall quality of the data. If you are like most organizations and feel like the data you currently have is preventing you from capitalizing on business opportunities, we have some steps that you can take today to start the process of data cleansing.

Know Where Salesforce Falls Short

While your investment in Salesforce may be hefty, the deduplication functionality in the off-the-shelf product is fairly limited. For example, there is no way to conduct a cross-object duplicate search. This means that your new lead may already be in your contacts and vice-versa. Also, a lot of companies have custom objects beyond the standard Leads, Contacts, and Accounts, and Salesforce by itself will not be able to check those for you. If you are working with large volumes of data, i.e. hundreds of thousands or even millions of records, the duplicate jobs performed by Salesforce will not be enough. In fact, Salesforce itself admits this issue in the Trailblazer Community.

Keep in mind that these are only some of the shortfalls of Salesforce’s built-in deduplication features. You can find more details about why the off-the-shelf product alone is not enough to catch all of the duplicates in this article. However, now that you are aware of the limitations of Salesforce in the deduping area, you will be in a better position to choose a third-party product that meets all of your needs. 

Choosing a Deduping Tool 

If you search the AppExchange for a deduping app, you will be inundated with products that all have their individual merits. However, each company has its own individual needs, which narrows down the search results to just a handful of possibilities. There are a few things you need to consider when comparing products. First of all, look for something that's easy to set up. One of the reasons that the built-in deduplication features inside Salesforce are not very effective is that they are rule-based. This means that your Salesforce admins will have to create a rule for each type of duplicate, which can prove to be impossible if we think about the various shapes and forms of fuzzy duplicates.

A much better approach would be to choose a tool that uses machine learning to catch the duplicates. This offers you several benefits. First of all, you eliminate all of the issues and hassles of setting up rules, since the algorithm will learn to identify future duplicates without being explicitly programmed to do so. You are also simplifying the setup process, since the product will be ready to use right away. The machine learning algorithms do the heavy lifting, and all you have to do is append the field values for the master record. A lot of products also allow you to automate the duplicate checking process, which is always helpful given that new duplicates appear all the time.

Thoroughly Plan Out the Process

One of the biggest mistakes a lot of companies make is that they start thinking about the endgame right away instead of focusing on how data enters Salesforce. For example, if your users are manually entering data into Salesforce or making edits, it can be very easy to make a simple typing mistake which causes all kinds of confusion. Automated data imports are not foolproof either, since a lot of the time the data is incomplete, and if any of the fields required by the object are missing, the import will fail. Therefore you need to account for all of the duplicate data entry points and plan out how you will address all of these issues.

In addition to planning out the technical aspects of implementing the deduplication tool, you will also need to take into consideration the human factor, i.e. any issues the end-users will have while getting accustomed to the new product. This will also require some planning, since you don't want to make a sudden change which interrupts the workflow of your employees. Also, be sure to provide user training, since it will take your employees some time to get adjusted, especially if there is a complex setup process involved.

Set Attainable Goals

Recent data shows that somewhere between 10% and 30% of the data inside a company's CRM is duplicate data. The key metrics you should be monitoring are accuracy, consistency, and completeness. The accuracy of the data is best measured through business interaction, since this provides you with real-time insights. If this is not possible, then you should use independent confirmation techniques. Pay close attention to the ratio of data to accuracy, which will identify known errors. This includes missing or incomplete information that could potentially be located in a duplicate record. If all of the processes you are implementing are proving to be effective, then the ratio should increase over time.

When we look at consistency, this refers to conflicting data. When you have duplicate records they will usually contain several versions of the truth and you have to append the entries to identify the master record and merge all of the duplicates. If you have conflicting data, you will not be able to get a complete view of your customer and you could be aligning your strategies incorrectly. This is where the completeness of the data comes in. Try thinking about all of the data scattered among duplicate records as pieces of a large puzzle that give you invaluable insights about the customer. Combing through all of the records manually or even with a rule-based application will prove to be very time-consuming if not impossible since it will not be possible to create a rule to fit each scenario. 

Constantly Collect Feedback

We mentioned the importance of monitoring some of the key metrics in your deduplication efforts, but listening to the actual people using the tool on a daily basis is just as important if not more. They could provide you with valuable insights that data may not be able to measure. For example, they could tell you that they are not trusting the tool to properly cleanse the data or that they are still spending more time than they would like fixing some of the duplicates manually and a lot of other constructive feedback. At the end of the day, you have to remember that the reason you are installing this particular app is to assist the people on the ground communicating with customers. If they are telling you that this thing just isn’t working, then this should be the most important factor in deciding to make a change. 

Don’t Postpone Deduping Your Salesforce 

While the duplicate issue may have snowballed into a big problem for many companies, they are unwilling to start tackling it given the magnitude of the problem and the resources required to deal with it properly. However, you always have to keep in mind that these duplicates are constantly draining your resources. As a general rule, keep in mind the 1-10-100 ratio: it costs $1 to verify the quality of the data you have, $10 to eliminate each duplicate, and $100 for every duplicate that is left unchanged. If you have hundreds of thousands or millions of records, such costs could really add up, which is why you should not delay deduping your Salesforce.

Getting Started with AI Personalization in Salesforce Marketing Cloud

This guest blog is delivered by Leah Fainchtein Buenavida, a technology writer with 15 years of experience, covering areas ranging from fintech and digital marketing to cybersecurity and coding practices.


The demand for constant, personalized, seamless customer experiences is always growing. According to the Connected Customer report, 67% of customers say their expectations for good experiences are higher than ever.

Marketers understand the value of personalization. However, delivering personalization at scale remains a challenge for many brands. Tools like Salesforce Marketing Cloud can help you automate personalization, make data-driven decisions, and create dynamic content.

This post reviews some of the tools included in the Marketing Cloud platform, with a focus on the Personalization Builder.

What is Salesforce Marketing Cloud?

Salesforce Marketing Cloud is a marketing platform that provides multiple tools for managing the interaction between a brand and its existing or potential customers. The platform enables you to contact customers on the right channel at the right time via email, SMS, or social ads, create multichannel experiences, and increase sales and customer acquisition.

This model of Marketing Cloud is based on the ME2B approach, where customers define the kind of relationship they want with brands. Brands need to create experiences that promote trust and strong connections with their customers. Marketing Cloud tools enable them to establish this relationship and collect important information about their users, their preferences, and their opinions.

What Can You Do With Marketing Cloud?

The Salesforce Marketing Cloud consists of seven primary products that leverage Artificial Intelligence (AI) technology and predictive analytics to connect you with your customers. You can interact with customers through mobile messaging, email, digital advertising, social media, and website content. The primary products are Email Studio, Mobile Studio, Social Studio, Advertising Studio, Einstein, Journey Builder, and Personalization Builder.

AI-based marketing tools can turn standard content into hyper-personalized messaging. Personalized emails, for example, have a greater chance of being opened and engaged with than their traditional alternatives. Only the most relevant communication can generate a positive response from your audience. Marketing Cloud helps you place the right content into your web messaging and email along the customer journey. Machine learning capabilities enable you to continuously improve and adapt the customer journey, keeping content relevant and engaging.

Salesforce Einstein

Einstein integrates Artificial Intelligence (AI) technology with Salesforce's Customer Relationship Management (CRM) system. Einstein uses predictive analytics, machine learning, and Natural Language Processing (NLP) capabilities to analyze customer data. Einstein's AI algorithms leverage this analysis to improve its capabilities and perform more accurate analysis. This technology can analyze user data in different sectors:

  • Sales—helps to increase conversion rates by predicting the probability of a customer purchasing a product.
  • Customer service—helps with predicting issues, classifying and routing of cases. Also includes intelligent chatbots to help customers resolve common problems.
  • Marketing—helps to increase conversion rates by predicting who is more or less likely to engage with an email, using engagement scoring and predictive recommendations.
  • Retail—uses customer data to identify the products a visitor might want, both in digital and traditional commerce. Brands can increase revenue by predicting if a consumer is more or less likely to purchase a specific item.

Salesforce Personalization Builder

The Salesforce Personalization Builder is based on Einstein. It uses predictive analytics to deliver personalized content to customers based on their preferences. The Personalization Builder shows you the behavioural history, real-time interactions, and buying preferences of customers. You can use this data to predict how they will act in the future, and understand the motives behind their actions. These insights can help you improve your customer engagement strategy across all relevant channels.

You can use Salesforce Personalization Builder together with other Marketing Cloud tools to further improve your marketing campaigns. Content Builder, for instance, manages all content and assets in one location. You can use Analytics Builder to track the performance of content campaigns and measure their success, and use Personalization Builder to extract valuable insights from the data to adjust a campaign accordingly.

How to Set Up the Personalization Builder in Marketing Cloud

The process below describes several technical steps that can help you set up the Personalization Builder in Marketing Cloud.

Step 1: content or product

Decide if you want to use content, product, or both, and set up your catalog with field attributes and activity tracking for category views, purchases, and shopping carts. A catalog stores all your content or product data with as many details as possible. This includes price, URLs, stock, description, keywords, and categories.

Step 2: import your catalog

The catalog importing task in Personalization Builder is time-consuming. There are two possible options:

  • Flat-file upload—the uploaded file is added to a publicly available web URL or an FTP account and imported into the Builder twice a day.
  • Streaming updates—updates and adds content and products through another snippet of JavaScript.

You have the option to map your catalog fields with the default fields of Marketing Cloud, and select which fields to tag. Tagging determines the fields that are used to build your affinity per profile. For example, tag the fields for category and brand, where brand is Toshiba and category is TV.

Step 3: data collection

Once the catalog is in place, you can start collecting user data. You might need help from your developer to implement all the necessary Collect Codes for behaviour tracking. These Collect Tracking Codes are JavaScript snippets that are used to gather data about known contacts and unknown visitor behaviour.

First, collect the available JavaScript snippets of Marketing Cloud. Start with the Base Collect Code, which needs to be implemented on every page of your web site. You can find the code in the official documentation.

Next, you need to capture user attributes and information. This script identifies your unknown visitors:

<script type="text/javascript">
	_etmc.push(["setOrgId", "MID"]);
	_etmc.push(["setUserInfo", {"email": "INSERT_EMAIL_OR_UNIQUE_ID"}]);
	_etmc.push(["trackPageView"]);
</script>

Step 4: enable the Personalization Builder

Turn on Einstein's data extensions. Follow these steps to let the Personalization Builder populate these data extensions in Contact Builder:

  1. Navigate to the Personalization Builder Status tab.
  2. Reveal the drop-down menu by choosing the grey Settings cog.
  3. Click on Data Extension Settings.
  4. Click on Enable Einstein Data Extensions.
  5. Click Save.

Conclusion

The most common challenge that brands encounter when using Salesforce Marketing Cloud is not knowing how to use its different tools. Marketers need to know which tools to use at the right time, and for what purpose. Salesforce Personalization Builder helps you better understand how your customers behave and why. It enables you to see your customers' behavioural history, real-time interactions, and buying preferences. You can set it up with only a few technical steps.

Salesforce is Retiring Their Data Recovery Service. Here’s What You Should Know.

Mike Melone is a Content Marketing Manager at OwnBackup, a global leader in SaaS business continuity and data protection solutions. New to the Salesforce ecosystem, Mike brings a decade of experience in copywriting and marketing to OwnBackup, where he curates original content related to all things Salesforce backup & recovery.


Salesforce is the most secure and available platform in the industry. Yet data protection remains a shared responsibility, as it does with all modern SaaS platforms. Customers are responsible for preventing user-inflicted data and metadata loss and corruption, and for having a plan in place to recover if it happens. That's why Salesforce recommends using a partner backup solution that can be found on the AppExchange.

Despite this recommendation, our 2020 State of Salesforce Data Protection survey found that 88% of companies are lacking a comprehensive backup and recovery solution, which may be especially risky with many companies increasing their remote workforce. Furthermore, 69% say their company may be at significant risk of user-inflicted data loss.

Salesforce Data Recovery Service Is Being Retired

Last year, Salesforce announced that they will be retiring their last-resort data recovery service, effective July 31, 2020, because it has not met their high standards of customer success and trust. For customers who aren't proactively backing up their data, Salesforce currently offers this last-resort Data Recovery service. However, at a cost of over $10,000 and a 6-8 week recovery time, this service is not an adequate option for many customers. The upcoming retirement of this last-resort Data Recovery service is an excellent opportunity to remember that proactive backup and recovery has always been a required and recommended best practice. Salesforce offers a number of native backup options to minimize your business risk of user-inflicted data loss.

Native Salesforce Options

Weekly Export and Report Exports

Users with "Data Export" profile permissions can generate backup .CSV files of their data on a weekly basis (once every 6 days) or on-demand via user-generated reports. The export can be scheduled and then manually downloaded when ready. Weekly Export backups can include images, documents, attachments, and Chatter files. In the event of a user-inflicted data loss, users must manually restore by uploading their .CSV files in the correct order. Because this process can be challenging, OwnBackup has created a free eBook with a step-by-step process for Recovering Lost Data Using Salesforce Weekly Export.

Data Loader

Data Loader is a client application for the bulk import or export of data. It can be used through the user interface to bulk import or export Salesforce records through .CSV files, with Insert, Update, Upsert, Delete, and Export operations. The Data Loader command line can also be used to back up data to or from a relational database, such as Oracle or SQL Server. Restoration of lost or corrupted data would be manual.

These native methods include data, but no metadata. In order to proactively protect your Salesforce platform, you should back up metadata and attachments as well. Without this vital piece, putting the relationships between your Salesforce data objects back in place can become a painstaking process.

Without the ability to maintain relationships, you'll only have partial restore capabilities. If an account is accidentally deleted, all of the contacts, cases, tasks, and other records will be deleted as well. If you only restore the account, but not its dependent child records, your recovery capabilities are only providing you partial coverage.

Having a copy of your data is important to meet the minimum standards of a backup. The real challenge is the ability to restore data back into Salesforce. Plan for regular data recovery testing to ensure that you are prepared for any unexpected data events. You must test your strategy so you’ll be aware of what will actually happen if you were to experience a data loss or corruption. When you test, you may find that you are unable to recover in the time or to the fullness that you require. When testing, check:

  • Are you able to recover specific versions of documents, data, or metadata?
  • Are you able to minimize data transformation during the restoration process?
  • How does your strategy handle different types of restore processes?
  • What is the performance and time to restore?

How to Trust in Your Salesforce Data Recovery Plan

At the beginning of this article, we mentioned that Salesforce recommends using a partner backup solution that can be found on the AppExchange. The reason they recommend this is almost entirely due to the challenges most customers experience when attempting to recover lost or corrupted data, and the fact that metadata cannot currently be backed up using native backup methods.

OwnBackup, a top-rated partner on the AppExchange, is here to help you identify the five concepts that really matter when it comes to a comprehensive backup AND recovery.

1. Recovery Point Objective (RPO) and Recovery Time Objective (RTO)

For many companies, having unresolved data loss or corruption for over one week could equate to millions of dollars in lost revenue. As you construct your data recovery strategy, you will need to define your recovery point objective (RPO) and recovery time objective (RTO).

The RPO is the amount of data a company can afford to lose before it begins to impact business operations. Therefore, the RPO is an indicator of how often a company should back up their data.

The RTO is the timeframe by which both applications and systems must be restored after data loss or corruption has occurred. The goal here is for companies to be able to calculate how fast they need to recover, by preparing in advance.
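
For example, a 24-hour RPO implies backing up at least once a day, while a 4-hour RTO means the entire restore process must complete within 4 hours of the data loss being discovered.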

What the recovery time and recovery point end up being is deeply influenced by backup frequency, backup retention, and your ability to compare current and past data and to restore just the data that has been impacted. RPO and RTO are two parameters that help minimize the risks associated with user-inflicted data loss.

2. Data Integrity

Data integrity means that you should have a complete backup of your Salesforce environment. Backing up a few critical fields or records isn't enough. Salesforce is a relational database that doesn't allow admins to manipulate or populate the record IDs. All of the relationships in Salesforce environments are based on these record IDs. In the case of a data loss, a cascade delete effect will remove all of the child records associated with a deleted account.

This is one of the big reasons why an audit trail is not a backup and recovery solution. You never know what the scope of a data loss will be in the future. For that reason, relying on field-level changes on a subset of objects as a reliable backup solution is a fundamentally flawed strategy.

Therefore, the best practice is to have a full backup of all of your Salesforce data, metadata, and attachments. You may also want to have a partial, high-frequency backup for critical objects that change more than a few times each day.

Another best practice for maintaining data integrity includes having restore capabilities that allow both full and granular restore to prevent an accidental overwrite of legitimate changes.

Finally, the integrity of your backups themselves is critical, particularly for companies with internal data security policies or regulatory compliance concerns.

3. Security

Your company is likely required to follow security requirements. Your backup and recovery process shouldn’t be excluded from these security requirements. Saving .CSV files of your Salesforce data within the company hard drive or your laptop should not be considered a best practice. Ideally, your backup data should be saved to trusted cloud storage. To meet compliance requirements the data should also be encrypted both in transit and at rest.

You should also consider the permission settings and accessibility of your backups. With your current backup process, could an inappropriate user access and delete data?

4. Reliability

A reliable backup and recovery plan includes automated change identification, scheduled backups, proactive data change monitoring, and the ability to reach out for technical support if needed.

You should have the ability to automatically identify changes that have occurred in your schema. A tool that can identify what has changed between backups is important for data, metadata, and attachments.

Your backups should run daily at a minimum, on a set schedule that is easy to define and monitor.

With so many end-users with read-write and delete permissions or merge abilities, proactive monitoring is crucial. A reliable data backup and recovery plan allows you to quickly find out when unusual changes have occurred or if a backup has failed.

5. Accessibility

Storing data backups outside of Salesforce is a best practice for accessibility. If Salesforce were to become temporarily unavailable, you would still have access to your data and metadata.

Data backup portability is a must for an optimal data backup and recovery plan. Data movement should be able to take place securely and conveniently. Whether you’re looking to conduct more complex reporting on large data sets with BI and ETL tools or want to maintain legacy business continuity and data recovery procedures, your data should be yours to use as you please.

Data accessibility is also a key requirement of GDPR and CCPA. In support of these regulations, you should have full transparency into your data, including backups. To comply with these regulations, you'll need to be able to search for where a Data Subject's data, including attachments, resides within your backups. Additionally, you must have the ability to rectify or forget an individual's data in your backups for full compliance.

 

Protect Your Data and Maintain Business Continuity with Comprehensive Backup and Recovery

By making the tough decision to retire their Data Recovery service, Salesforce has reiterated their commitment to customer success and trust. Now is the perfect time to reassess your current backup and recovery strategy. Were you completely relying on Salesforce’s Data Recovery service? Why not start backing up with their native Weekly Export service instead? Any backup strategy is better than doing nothing at all.

We’d also encourage you to explore Salesforce AppExchange partner solutions, such as OwnBackup. OwnBackup brings ROI to its over 1,700 customers every day by helping them protect their Salesforce data. OwnBackup customers are almost 3x more likely to notice a data loss or corruption and they feel 3x more prepared to recover.

Now is the perfect opportunity to take a step back and put a comprehensive backup and recovery plan in place to ensure that your Salesforce data has the same level of protection as your other critical systems.

To request a demo and learn more about OwnBackup and our solutions, click here.

Salesforce experts against Covid-19 FTW!

Global leader in Salesforce recruitment Mason Frank International recently got in touch with me about an opportunity for an important project they were working on. In response to the Covid-19 crisis, they have collated expert advice from leading Salesforce MVPs and professionals into an industry whitepaper, of which I am proud to say I am a part.

The whitepaper covers five common challenges many are struggling with at this moment: cost saving, data security, remote working, growing at scale, and business continuity.

I am honored to have been involved in this project, providing Salesforce advice to those who need it. I hope my own and my peers’ insight helps those looking for answers at this unprecedented time.

Please do have a read and share with anyone who would benefit from it: https://www.masonfrank.com/overcoming-business-challenges-with-salesforce/

Thank you, and stay safe!

Salesforce Data Management 101: Know Your Storage

Today’s guest post is delivered by Gilad David Maayan, a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership.


When developing an app, you need to know how data is stored, structured, and organized. This information is crucial when building, maintaining, and updating your software. It can also help you understand the capabilities of your build, how far you can take it, and when it will need to be scaled up.

In Salesforce, you can use two types of storage: data and file. File storage, in turn, is handled through five methods designed for specific use cases — Files, CRM content, Documents, Attachments, and Knowledge. In this article, you will learn how storage works in Salesforce, including tips to help you avoid hitting your storage limits.

How Data is Stored in Salesforce

When working with Salesforce, there are several reliable and efficient ways to store your data. This includes media files, customer profiles, documents, and presentations. This storage is broken down into two types — data and file. 

Data storage includes records such as accounts, cases, custom objects, events, opportunities, and notes. This data is automatically stored within the Salesforce database, and you do not have individual control over where specific items go.

File storage includes attachment files, customer content, media, Chatter files, documents, and custom files in Knowledge articles. You can control this content individually, depending on how it is created and attached. Below are the five methods you can use for file storage.

Files

Salesforce Files is a storage location you can use to store any type of file. Salesforce has positioned it to replace most of the following methods as it offers more features and functionality. Files enables you to follow specific files, generate links, share files with users or groups, and collaborate on files. In Files, each file can be up to 2GB. 

Customer relationship management (CRM) content

Salesforce CRM content is where you can store files that you want to publish and share with coworkers and customers, for example presentations or content packs. This can include marketing files, document templates, media, or support files. This storage type supports files up to 2GB, although this drops to 10MB depending on how you upload data.

Documents

Documents storage enables you to store a variety of web resources, including logos, email templates, and Visualforce materials. When files are stored here, you do not have to attach data to specific records. In Documents, files can be up to 5MB.

One thing to keep in mind — if you are using an older version of Salesforce, Documents storage is still available. However, if you are using Lightning Experience, this functionality has been replaced by Files. When you update your Salesforce, you need to convert your Documents to Files before you can access your data.

Attachments

Attachments is a storage area you can use for files that you want to attach to specific records. For example, marketing campaigns, cases, or contact information. The downside of Attachments is that you can’t share files with links and do not have access to version control. In Attachments, files can be up to 25MB and feeds can be up to 2GB.

Knowledge

Knowledge is a storage area you can use to create and store knowledge base articles. These files can be searched by internal users and shared with customers through your portals or Lightning Platform Sites. In Knowledge, each article can be up to 5MB.

How to Avoid Hitting Your Storage Limits in Salesforce

Regardless of how you store and manage your files in Salesforce, you need to be aware of what your storage limits are and how to make the most of those limits. You should also be aware of what alternative options you have to expand your storage. 

Storing data outside of Salesforce

Sometimes, the most practical option is to store some of your data outside of Salesforce. One reason for this is your storage limits. In Salesforce you are allowed:

  • Data storage—10GB of base storage plus 20MB of storage per user. If you are using the Performance or Unlimited versions, user storage is 120MB per user. However, the Developer, Personal, and Essentials versions follow different rules, with no per-user storage and 5MB, 20MB, and 10GB of base storage respectively.
  • File storage—for most plans you get 10GB per organization and from 612MB to 2GB per user. For the Developer and Personal plans you get 20MB, and for Essentials you get 1GB. No per-user file storage is provided for these plans.
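
For example, under these rules a 100-user Enterprise org would have 10GB + (100 × 20MB) = 12GB of data storage available.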

Even if your data is still within storage limits, keeping redundant or unnecessary data in Salesforce can cause issues, including:

  • Degraded performance
  • Inaccurate reporting
  • Inefficient searches

To avoid these issues and ensure that your limits are not exceeded, you might consider adopting a cloud storage service. These services can provide scalable, cheap storage that you can connect with API or third-party extensions to your Salesforce system. 

For example, Azure File Storage by NetApp can provide a standard file system format that you can use from anywhere, including hybrid systems. Or, AWS S3 services can be connected for unstructured storage and any type of data. 

Cleaning up unwanted data

Maybe you do not want to store data outside of Salesforce or you have already moved data but still want to improve storage efficiency. In these cases, you can focus on cleaning your data. You can do this either manually or automatically depending on the type of data you’re trying to eliminate. 

For manual clean-up, Salesforce provides a native deletion wizard. You can use this wizard to eliminate old accounts, contacts, activities, leads, or cases. To identify data that is safe to remove, you can run a report to see when data was last used and eliminate records older than a certain date. Or, you can individually delete data as users inform you it's no longer accurate.
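
If you are comfortable with a little Apex, stale records can also be removed in bulk from an anonymous Apex window. A minimal sketch, assuming (hypothetically) that leads untouched since 2019 are safe to delete:

// Anonymous Apex: delete stale leads in one transaction
// (stays within the 10,000-row DML limit).
List<Lead> staleLeads = [
    SELECT Id
    FROM Lead
    WHERE LastModifiedDate < 2019-01-01T00:00:00Z
    LIMIT 10000
];
delete staleLeads;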

Another option is to use extract, transform, load (ETL) tools to pull your data, process it (removing unnecessary data), and load the remaining data back in. This option enables you to script clean-up based on whatever parameters you’d like. However, it can be a lengthy process and requires the help of external tools, such as Salesforce Data Loader or Informatica.

Archiving data

During your data downsizing, you will probably find data that you no longer need in your system but that you don't want to delete: for example, old client files that you need to keep for compliance, historical customer reports, or knowledge base articles for legacy products or services.

If you have data like this that you want ‘just in case’, archiving is your best option. Archiving enables you to export data from your system, compress it for efficiency, and store it wherever you prefer. 

Often, the previously mentioned cloud services are a good option for this. Many services have cold storage tiers available that are much cheaper than on-premise storage. These services enable you to store large volumes of data that you rarely need to access, and can eliminate worries about data corruption or loss due to hardware failure.

Conclusion

Salesforce comes with a specific data management structure that you need to work within. The two basic storage types are data and files, and file storage is sorted further into five organizational types — Files, CRM content, Documents, Attachments, and Knowledge. However, you do not have to use all of these. A recent Salesforce change enables you to store most of these elements as Files.

Whichever structure you choose, be sure to continually monitor and optimize your storage. Adding monitoring on a regular basis can help you optimize both performance and billing. To avoid hitting your storage limit, you can store data outside of Salesforce, clean up unwanted data, and archive cold data. 

Survival Guide for duplicate management in Salesforce

Dario is a senior consultant and project manager @ TEN, the first Italian Salesforce partner.
Totally in love with Salesforce since 2013, he has achieved 9 certifications so far and gained extensive experience in project implementation.
Based in Milan, he's passionate about Formula 1 and basketball.


Duplicate management on a CRM is always a hot topic during a Salesforce implementation. Data is often stored in different silos, like your ERP, e-commerce, or marketing tool.

When you migrate customer data from different sources in your Salesforce org, there are a lot of options out there to help on duplicate issues. One of the best ways is to implement a SSOT (Single Source of Truth) with Salesforce Customer 360 or an MDM tool, but this option could affect the budget.

In this article you can read about one of the less expensive ways to find and merge duplicates in Salesforce.

Standard Duplicate Management

Duplicate management in Salesforce is a real-time process that lets you block the creation of new leads, accounts, and contacts, or alert users and report on potential duplicates. It's included in your Salesforce license, and implementation is divided into simple steps:

Create a Matching Rule

  • Select the object.
  • Give it a name and choose the fields to consider when looking for duplicates (for example Account Name, Shipping Street).
  • Choose the type of match (exact, fuzzy) for each condition.
  • Save.

Create Duplicate Rules

  • Identify the object where you want to find duplicates (for example Account).

  • Define actions (allow/block) and record exclusions:
    if you block, no record creation will be allowed;
    if you allow, it is possible to alert users during creation and to report on specific objects.
  • Associate the matching rules (up to 3) used to find duplicates (created in the previous step).
    You can also decide to use the standard matching rules provided by Salesforce.

Hints

Matching rules are cross-object: for example, you can compare Accounts with Leads or with Contacts, and vice versa.

If you choose the alert action, you can report on the duplicate object. This option lets admins or super users decide whether or not to merge potential duplicates manually, and you can create tasks with Process Builder as a reminder.

Automate your merging activity 

What could be a solution to automate merging? If you have a lot of records, merging manually can become time-consuming.

An economic solution is an AppExchange product: Cloudingo. The logic is similar to standard Duplicate Management, but this solution gives you more powerful tools that let you automate the job!

These are the main steps.

Create Filter

Choose your Salesforce object (for example Account).

Don't worry about privacy: Cloudingo doesn't store any data outside your Salesforce org.

Select Fields for match

Select the fields to consider when looking for duplicates (for example Account Name, Billing Street).

Exclude records

Consider whether you would like to exclude certain types of records from the dedupe process, for example partner accounts, and finally save.

Automation rules

Create a new rule, choose the object, and give it a name.

Decide how to merge records

This is where you define the specifics of how you want Cloudingo to merge your duplicate records.

Every rule created before needs to be associated with an automation. These are the main steps to follow:

Master record selection

  • Decide which record in a group is the master, for example the most complete or the first record.
  • Order groups by a date.
  • If you have chosen a master definition that may result in more than one record having the same value, you can break ties using created or modified dates.

Field value selection

Here you can define how all field values on the record must be populated.

For example, the most common value, or the value from the newest record in the group.

You can also choose to override the master value or override only if fields are blank (master enrichment).

In the Specific Field Values section, you can set rules for specific fields on the object. These rules will supersede any settings above.

This is useful for:

  • Modifying the logic on external ID fields, to preserve the original values on the master and avoid mixing up keys.
  • Summing all record values together in a field to determine what value to set on the master.

The final section defines situations where Cloudingo will not merge records or the entire group of records.

Remember to assign the rule to the filter you want to apply!

Dedupe dashboard 

You can create a lot of filters and apply the same automation rule to each of them, or a different one for each. The result is easy to view in your dedupe dashboard:

  • Groups represent the number of groups of duplicated accounts.
  • Matches are the individual accounts that are potentially duplicated.

Schedule automation 

Here you decide when the rules have to run and merge duplicates with the automation rules set before.

Before setting your schedules, consider running filters in a suitable time slot, for example when user activity is reduced and there are no automatic batch uploads from your middleware or from other systems.

Hints:

It's strongly recommended to connect your full sandbox environment first and test the dedupe results before running on production. You can easily migrate filters and automations from test to production in Cloudingo.

Be careful not to lose data

Your legacy systems may need to maintain data or keys for your duplicated accounts. If this is your case, you can consider implementing a simple trigger that captures the accounts deleted by Cloudingo on their way to the recycle bin and writes them to a custom object, as sketched below.

Your records will maintain original values on fields and will be related to the master account.
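
A minimal sketch of such a trigger, assuming a hypothetical archive object Account_Archive__c with a lookup to the master account (after a merge, the standard MasterRecordId field on the deleted record points to the surviving account):

trigger AccountArchive on Account (after delete) {
    List<Account_Archive__c> archives = new List<Account_Archive__c>();
    for (Account acc : Trigger.old) {
        archives.add(new Account_Archive__c(
            Name = acc.Name,                       // preserve the original account name
            Legacy_Key__c = acc.Legacy_Key__c,     // assumed external key used by legacy systems
            Master_Account__c = acc.MasterRecordId // relate the archived copy to the master account
        ));
    }
    insert archives;
}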

Summary

In a parallel stream, it is very important to understand how to prevent duplicate generation in the first place, for example by promoting customer registration instead of guest purchasing.

There are no magical hints for good working filters; it all depends on the data. If the original database is dirty, you can think about cleaning and enriching it with external services before applying dedupe filters.

This guide is not a final solution, but it can help you understand duplicate issues and find a cheaper way to manage them.

How to Secure Salesforce Workloads: Tips and Best Practices

Today’s guest post is delivered by Gilad David Maayan, a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership.


Salesforce provides security controls for your data, categorized according to organization, object, field, and record level. To properly secure your Salesforce workloads, you must first understand the Salesforce data security model, as explained in this article. You will also learn tips and best practices for data sharing, auditing, session configuration, and encryption.

Salesforce Data Security Model

Within Salesforce, you have full control over what information users can access. This extends to articles, records, and individual fields. Each security concern is categorized into a level, which enables you to control certain aspects of security.

Organization Level Security

Organization level security settings enable you to determine who has access to your Salesforce system, including from where and when.

At the organizational level, you can define:

  • IP restrictions—determines what IP addresses users can access data from.
  • Login access—determines timeframes when users can access data.
  • Password policies—determines the life cycle of passwords, required complexity levels, and reusability. 

Object Level Security

Object level security settings enable you to guide how objects are handled, including creation, access, and modification.

At the object level, you can define:

  • Profiles—determine who is allowed to do what with objects, based on individual create, read, edit, and delete (CRED) settings. 
  • Permission sets—enable you to extend the permissions granted by user profiles in a standardized way (a short sketch of assigning one programmatically follows).
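Permission sets can also be assigned in Apex. A minimal anonymous-Apex sketch, where the permission set API name Sales_Extra and the username are hypothetical:

// Look up the permission set and the user (both names are placeholders)
PermissionSet ps = [SELECT Id FROM PermissionSet WHERE Name = 'Sales_Extra' LIMIT 1];
User u = [SELECT Id FROM User WHERE Username = 'jane.doe@example.com' LIMIT 1];

// PermissionSetAssignment is the standard junction object for this
insert new PermissionSetAssignment(AssigneeId = u.Id, PermissionSetId = ps.Id);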

Field Level Security

Field level security settings enable you to restrict specific fields according to user profile. For example, you can determine who can see an employee’s compensation information. For those without permission, this information is hidden from view or access.
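In Apex, the same restriction can be honored before exposing a sensitive field; here is a minimal sketch, where Employee__c and Compensation__c are hypothetical names mirroring the compensation example:

// Field describe for the (hypothetical) compensation field
Schema.DescribeFieldResult compensation =
    Schema.sObjectType.Employee__c.fields.Compensation__c;

if (compensation.isAccessible()) {
    // the running user can read the field: safe to query and display it
} else {
    // omit the field from queries and from the UI for this user
}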

Record Level Security

Record level security settings enable you to determine how and by whom records are accessed or shared. 

At the record level, you can define:

  • Organization-wide sharing defaults—determine the baseline level of access to records when no other sharing applies. 
  • Role hierarchy—enables you to grant tiered permissions. This grants higher-level users, such as supervisors, access to all data of the users below them. 
  • Sharing rules—determine how you can share information and with whom. You can use these rules to define lateral sharing or to allow access outside the role hierarchy.
  • Manual sharing—enables you to grant limited sharing permissions on individual records, for example when only one specific user needs access to a record (see the sketch after this list). 
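Manual shares are plain records on the standard share objects, so they can also be created in Apex. A minimal sketch, with a placeholder account name and username:

// Grant one user read access to a single account
Account acc = [SELECT Id FROM Account WHERE Name = 'Acme' LIMIT 1];
User u = [SELECT Id FROM User WHERE Username = 'jane.doe@example.com' LIMIT 1];

// RowCause defaults to 'Manual'; OpportunityAccessLevel is required on AccountShare
insert new AccountShare(
    AccountId = acc.Id,
    UserOrGroupId = u.Id,
    AccountAccessLevel = 'Read',
    OpportunityAccessLevel = 'None',
    CaseAccessLevel = 'None'
);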

Salesforce Security Best Practices

When configuring or auditing your data security settings, there are several best practices you should apply. These practices can help you increase the overall security of your data and ensure that customer and employee privacy is protected.

Data Sharing

Data sharing policies often aren’t designed exclusively for security purposes, but they can significantly impact security.

For example, you should carefully choose between hierarchical sharing and use of Public Groups. Keep in mind that hierarchical sharing provides a higher tier user access to all data of those below them. In contrast, Public Groups enable you to define sharing rules regardless of where users fall in a larger hierarchy. 

You should also take care with how you allow owner sharing. When records are shared manually by owners, you have limited ability to track who has access. You can query which records are shared from the Developer Console, but this is not practical at a larger scale. Additionally, when records change owners, this information is lost. The lack of visibility this creates can be a liability if owners are sharing sensitive information without approval. 
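For a one-off check, a query like the following (runnable from the Developer Console query editor or anonymous Apex) lists the manual shares on accounts:

// Manual shares only; rows created by sharing rules or the role
// hierarchy carry different RowCause values
List<AccountShare> manualShares = [
    SELECT AccountId, UserOrGroupId, AccountAccessLevel
    FROM AccountShare
    WHERE RowCause = 'Manual'
];
System.debug(manualShares.size() + ' manually shared account rows');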

Audit Regularly and Watch for Vulnerabilities

As with any system, you should make sure to regularly audit your configurations and settings. Audits can help you identify configurations that have been changed manually or automatically due to updates. They also help you identify users or roles that are no longer valid and should be removed. Auditing can also help you identify inefficiencies in your current roles and groups and point to how these aspects can be streamlined or refined. 

It is also a good idea to regularly check for Salesforce security vulnerabilities in a vulnerability database, and take action if necessary. There is also a standard Salesforce procedure that allows you to perform a full security assessment and penetration test of the Salesforce platform to ensure it meets your security requirements.

Session Settings

Session settings provide you control over individual user sessions, including verification and timeout settings. Verification settings enable you to specify whether or not multi-factor authentication is needed. This is activated via the “Raise session to high assurance” setting. This feature is available for a variety of data and services, including reports, dashboards, and connected applications. 

Timeout settings enable you to define for how long a session is authenticated and for how long inactive sessions should persist. When setting this, you need to find a balance between convenience and security. You don’t want your users to have to log in every thirty minutes, but you also don’t want sessions active for hours after a user is done with the system for the day. 
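If you rely on the high assurance setting, Apex can also inspect the current session’s level at runtime; a small sketch (the step-up logic is left as a comment):

// getCurrentSession() returns a map of session attributes
Map<String, String> session = Auth.SessionManagement.getCurrentSession();

if (session.get('SessionSecurityLevel') != 'HIGH_ASSURANCE') {
    // the session was not raised to high assurance (e.g. via multi-factor
    // authentication): trigger step-up verification before exposing
    // sensitive reports, dashboards, or connected apps
}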

Shield Platform Encryption

Shield Platform Encryption is a natively integrated service that enables you to encrypt your data in-transit or at-rest. You can use it to extend the built-in encryption that comes with Salesforce by default. 

With Shield Platform you can encrypt a range of data, including:

  • Fields—includes a range of standard and custom fields
  • Files—includes attachments, notes, PDFs, and images
  • Data elements—includes analytics, search indexes, Chatter feeds, and Change Data Capture information

Shield Platform Encryption works via keys managed either by you or Salesforce. If you use Salesforce managed keys, you can create keys based on a master secret and organization-defined key material. If you wish to manage your own keys, you can use the Cache-Only Key Service to fetch the key as needed. 

Apply the Principle of Least Privilege

When creating permissions, access controls, and roles, be sure to enforce the principle of least privilege. This principle specifies that only the minimum functional amount of access is provided. These limitations help reduce the damage that users can cause, accidentally or purposely. They also limit any access provided by compromised credentials. 
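The same principle can be enforced in Apex; as a sketch, Security.stripInaccessible removes any fields the running user is not allowed to read before the records are exposed:

// Strip fields the current user has no read access to
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE,
    [SELECT Id, Name, Email, Phone FROM Contact LIMIT 100]
);
List<Contact> safeContacts = (List<Contact>) decision.getRecords();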

Conclusion

Salesforce provides you with the majority of the features and tooling needed for basic security. The organization level enables you to configure access control, object level is for profiles and permissions, field level restricts access to fields, and record level enables you to create a record access hierarchy. 

Once you configure your security settings, you should set up sharing procedures, audit regularly, configure and monitor session restrictions, encrypt data, and apply the principle of least privilege. 

Marketing Cloud and Deployment Manager: the beginning of a new love story

This post is delivered by Alex Rogora, an Analyst and Cloud Administrator specialized in Marketing Cloud. Thanks to this multipurpose profile – a computer science background, a university degree in Communication and several years of work experience in email marketing – he is a member of the Solution team at WebResults (Engineering Group), which studies new technologies released by Salesforce.


The life of a Marketing Cloud consultant can be hard, especially when it comes to migrating configurations from one Business Unit to another.

Of course, some assets or Data Extensions can be shared between multiple BUs in the same environment, but when it comes to customer journeys, automations, or the data model, the only solution used to be replicating the configuration manually, without any kind of support.

This was true until a few months ago, when Deployment Manager quietly made its debut on the AppExchange.

As reported in the dedicated page, Deployment Manager is a Salesforce Labs app that “allows users to import/export Marketing Cloud campaign configuration”.

Easy, clear and even free: let the revolution begin!

WHAT CONFIGURATIONS ARE WE TALKING ABOUT?

The first feature, enabled in April 2019, was the export of the canvas structure of customer journeys; in a short time, the export of Data Extension schemas was added as well.

To be honest, not every element that makes up a journey is replicated at the moment: for example, the entry source is missing and for some activities there is only a placeholder, but it’s a good place to start.

Source journey (1) and deployed journey (2)

For Data Extensions, the snapshot is limited to the fields, while the contained records are not exported. In my humble opinion this is right, because of privacy constraints, and because the data in a test BU is often different from that used in production.

A few months later, Deployment Manager added partial support for deploying automations and, in certain cases, also for the related activities they contain.

For example, if the automation contains a query activity, the extracted JSON file will also contain the information needed to generate the activity itself and the corresponding destination DE. Unfortunately, in the case of “send email” activities, neither the related activity nor the associated creative will be created; we’ll only see the placeholder in the automation step. Fortunately, the import report is quite descriptive and provides useful information on each element copied.

Import report

Finally, Deployment Manager supports exporting the attribute groups of the data model.

In this case the deployment process seems to be quite complete, because both the attribute sets and the necessary DEs are automatically exported.

But clearly, this is only the beginning.

Looking at the near future, the expectation is support for Content Builder folder organization and assets, plus the completion of the existing journey features.

HOW DOES IT WORK?

Well, it’s very simple.

Once installed from the AppExchange in both BUs (the source and the destination), Deployment Manager allows you to export the configurations you want to copy by creating a JSON file that can be easily downloaded.

This snapshot contains the metadata of the journey, DE, attribute group, or automation without any customer data or campaign data. The exported file can therefore be easily re-imported into the target business unit, even in different accounts.  

Create the snapshot and import it in the new BU

Wait, wait..wait!

Did you just say also in different accounts? Yes, even in BUs belonging to different Marketing Cloud enterprise accounts.

And want to know another amazing thing?

We can export/import multiple configurations at the same time, in the same snapshot, with a few clicks!

It is possible to export / import multiple configurations at the same time

So, even though there is still some stuff to fix or add, it’s clear that this app allows all customers to decrease the effort needed for production deployments, while also minimizing the risk of error during this phase.

Deployment Manager is also a useful ally for Salesforce partners, because it allows you to easily recreate, in the customer’s account, configurations previously implemented in other environments; you can also store a snapshot for templating, backup, auditing, or version control.

And the audience quickly noticed it.

Deployment Manager was one of the highest rated sessions in the Partner Lodge at Dreamforce 2019 and it’s also the 2nd most downloaded Salesforce app which allows partners to streamline account creation, migration and setup.

Believe me, we will hear more about it in the next months!


“Salesforce Advanced Administrator Certification Guide” made it to the Best New Salesforce eBooks to read in 2020

BookAuthority Best New Salesforce eBooks

I’m happy to announce that my book, “Salesforce Advanced Administrator Certification Guide: Unleash your Salesforce administration superpowers with an advanced training certification guide”, made it to BookAuthority’s Best New Salesforce eBooks.
BookAuthority collects and ranks the best books in the world, and it is a great honor to get this kind of recognition. Thank you for all your support!
The book is available for purchase on Amazon.

