Nerd @ Work

When Salesforce is life!

The big burnout – How COVID-19 is accelerating the Salesforce skills gap

Nabila Salem is on the Board of Tenth Revolution Group and, as President of Revolent Group, is responsible for leading the creation of Salesforce and AWS talent. With over 15 years of experience in professional services, tech recruitment and marketing in the UK and USA, Nabila was the first and youngest female to be appointed VP at the FTSE 250 company she previously worked for. She is passionate about creating talent and plays an active role in encouraging, supporting and promoting diversity in the workplace. Nabila was recognised in Management Today’s 35 Women Under 35 List 2019, and most recently in Computer Weekly’s Most Influential Women in UK Tech.


Recent research from the 2021 Mason Frank Salesforce Salary Survey shows that the acceleration of digital transformation triggered by COVID-19 is putting increased pressure on workers, creating the perfect conditions for employee burnout and a mass exodus from the workforce. So what should organisations be doing to address this alarming trend head-on?

The data shows that, prior to the pandemic, only 27% of professionals regularly worked outside of their contracted hours. Post-pandemic, this number has rapidly grown to 42%.

It also shows that the proportion of employees who have never worked outside their contracted hours is shrinking quickly, from 10.5% prior to the pandemic to just 7.28% post-pandemic.

The boom in demand for tech has seen many companies prosper (41% of companies were hiring new IT staff during the pandemic, with a further 62% planning to add more before 2022). But, without intervention, the added pressure on existing employees may result in increased burnout and growing attrition rates.

Why the Salesforce skills gap matters

These new statistics should be of great concern for our sector, particularly for Salesforce stakeholders. As far back as 2018, tech already had the highest turnover rate of any industry, at a staggering 13.2%, so anything that threatens to grow that number is a real problem.

As the tech skills gap grows, and experienced but overworked employees leave the sector, the war for talent will get much worse. Businesses of all sizes, not just the small ones, will struggle to afford the people they need to help them thrive.

Of course, the consequences of leaving the skills gap unfilled are already well documented. The Deloitte and the Manufacturing Institute Skills Gap Study warned of economic output losses in the US of up to $454 billion by 2028 if the skills gap is not closed. And, according to Airswift, the US talent crunch is currently at a 10-year high, which could cost up to $162 billion if unresolved.

How to close the Salesforce skills gap

As a talent creation organization that specializes in creating net new Salesforce and AWS talent, we believe there are measures that can be taken to mitigate the impact of the growing skills gap, and even start to close it.

The first thing you have to realize is that no one company or organization can do this alone. It will take a concerted effort from individual organizations, software providers such as Salesforce, and talent creation companies such as ourselves.

To do this, you have to work the problem from both ends. First and foremost, we need to shrink our attrition rates as a sector, which means reducing employee dissatisfaction and burnout, and placing a higher value on employee wellbeing.

At the same time, organizations need to rely less on traditional hiring methods and accept more candidates from non-traditional career paths. Not every employee needs to be an Ivy League grad!

Finally, we need to improve diversity within our sector, and allow more people from different backgrounds to enter, progress, and stay within tech, if we have any hope of bringing in new talent at the scale we need to solve the skills gap.

Closing the skills gap with strategic diversity initiatives  

Historically as a sector, we have employed a very narrow approach to our hiring. We’ve over-relied on traditional hiring markers like grades achieved or universities attended. And it shows – our sector has the worst diversity stats of almost any industry.

Not every Salesforce candidate has to be a STEM graduate and, if we only look for these people, we’ll never close the gap. People who are self-taught, have upskilled, or have gained experience but not qualifications are just as valuable, professionally speaking, as the ‘more traditional’ applicant.

Especially when you consider how Salesforce in particular is highly accessible through non-traditional pathways (with Trailhead, anyone can learn Salesforce), only taking on STEM grads seems unnecessarily reductive.

Beyond this, we also need to look at our processes for acquisition. In particular, we need to examine the things that stop us bringing on a more diverse range of candidates. To do this, businesses need to rethink their workforce planning strategies. While traditional recruitment channels are great for some hires, they should not be relied on exclusively. Instead, look to work with suppliers who have a focus on diversity and inclusion and broaden the range of places you source talent from, be it returners programmes, apprenticeships, or talent creation companies like Revolent.

Ultimately, we cannot let burnout deplete our sector of the valuable employees that we already have. As a collective, we need to support our existing talent and focus on closing the skills gap, in order to relieve pressure and guarantee a healthy, future-proofed pipeline of skilled workers.

ABOUT REVOLENT GROUP

Revolent Group, a division of Tenth Revolution Group, specializes in creating talent that can thrive within niche technology markets, including Salesforce and AWS. We recruit, cross-train, place and develop talent for those ecosystems, fuelling the market with the next generation of certified professionals in cloud technology. With hubs in Australia, the US, UK, and Canada, Revolent offers a truly global solution to the lack of talent in the industry.

For more information, visit: www.revolentgroup.com

Forceea 2021 User Meeting (it’s free!)

Forceea (https://github.com/Forceea/Forceea-data-factory) is the most powerful and sophisticated native data factory for Salesforce, and it’s open-source!

What

📣 This is the 1st Forceea User Meeting (free online event).

When

🕓 Saturday, July 10, 4 PM (UTC).

Agenda

▶️ Meet other Forceea users.

▶️ See new features of the next release (v2.5).

▶️ Learn advanced techniques.

▶️ Showcase your own code.

Data Integration between two Salesforce Orgs using Talend

This post has been baked by Akashdeep Arora, founder of #BeASalesforceChamp and #MakingChampion, 8X Salesforce Certified, #LightningChampion, 6X Trailhead Ranger, 5X Trailhead Academy Certified, #SalesforcePartyAnimal #SalesforceTravellerGeek


Greetings Trailblazers! Many developers have asked this question: how do you integrate two Salesforce orgs without any custom code?

Here is a quick way: you can use Talend Open Studio for Data Integration, which relies on simple drag-and-drop functionality to transfer your records from one Salesforce org to another.

Talend is an open-source data integration platform that lets you integrate different systems, offering 800+ connectors and components to perform a wide range of operations.

Let’s walk through it quickly to get you familiar with it.

Steps for Talend Integration

  • Launch Talend Studio.
  • Select the Create a new project option and enter a project name in the field.
  • Click finish to create the project and open it in the Studio.

Create a job

  • In the Repository tree view of the Integration perspective, right-click the Job Designs node and select Create job from the contextual menu.
  • An empty design workspace opens up showing the name of the Job as a tab label.
  • The Job you created is now listed under the Job Designs node in the Repository tree view. You can open one or more of the created Jobs by simply double-clicking the Job label in the Repository tree view.

Centralizing Salesforce metadata

Talend Studio provides a Salesforce metadata wizard to quickly set up a connection to a Salesforce system so that you can reuse Salesforce metadata across Jobs.

  • In the Repository tree view, expand the Metadata node, right-click the Salesforce tree node, and select Create Salesforce from the contextual menu to open the Salesforce wizard.
  • Enter a name for your connection in the Name field, select Basic or OAuth from the Connection type list, and provide the connection details according to the connection type you selected.

With the Basic option selected, you need to specify the following details:

  • User Id: the ID of the user in Salesforce.
  • Password: the password associated with the user ID.
  • Security Key: the security token.
  • The newly created Salesforce connection is displayed under the Salesforce node in the Repository tree view, along with the schemas of the selected modules.
  • You can now drag and drop the Salesforce connection or any of its schemas from the Repository onto the design workspace, and from the dialog box that opens choose a Salesforce component to use in the Job.

Mapping data flows

Mapping components are advanced components which require a more detailed explanation than other Talend Open Studio Components. The Map Editor is an “all-in-one” tool allowing you to define all parameters needed to map, transform and route data flows via a convenient graphical interface.

You can minimize and restore the Map Editor and all tables in the Map Editor using the window icons.

tMap operation

All these transformation and/or routing operations are carried out by tMap; note that this component cannot be a start or end component in the Job design.

tMap uses incoming connections to pre-fill input schemas with data in the Map Editor. Therefore, you cannot create new input schemas directly in the Map Editor. Instead, you need to implement as many incoming Row connections to the tMap component as required, in order to create as many input schemas as needed. In the same way, create as many output row connections as required. However, you can fill in the output with content directly in the Map Editor through a convenient graphical editor.

The Map Editor requires the connections to be implemented in the Job in order to define the input and output flows in the Map Editor. You also need to create the actual mapping in the Job in order to display the Map Editor in the Preview area of the Basic settings view of the tMap component.

How to run a Job in normal mode

  • Click the Run view to access it.
  • Click the Basic Run tab to access the normal execution mode.
  • In the Context area to the right of the view, select in the list the proper context for the Job to be executed in. You can also check the variable values.
  • If, for any reason, you want to stop the Job in progress, simply click the Kill button. You will need to click the Run button again to restart the Job.

Steps to Schedule a Job

Open up Talend Open Studio.

  • Select the job you wish to automatically run based on a schedule.
  • Right-click its name in the Repository tab.
  • Select Build Job option.
  • In the pop-up window select where you would like to save the archive.
  • Select the version of the job, if you have multiple versions.
  • Make sure that the build type is set to Standalone Job.
  • Tick Extract the zip file (You will need to extract the archive anyway).
  • Click Finish.

Once the job is extracted, you can schedule it to run on the server in order to automate it on a timely basis, for example with a scheduler such as cron, as sketched below.
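
As a minimal illustration, assuming a Linux server with cron and that the extracted standalone build contains a shell launcher script (Talend names the launcher after the job, so the path below is purely hypothetical), a nightly run could be scheduled like this:

# hypothetical crontab entry: run the extracted Talend job every day at 2 AM
0 2 * * * /opt/talend/jobs/SalesforceSync/SalesforceSync_run.sh >> /var/log/talend/SalesforceSync.log 2>&1

Adjust the script path, schedule and log location to your own environment.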

You can play around with different components like tMap, tLogRow or tSendEmail as per your needs.

To summarize quickly: you just need tInput, tMap and tOutput. Play around with these components using insert, update or upsert operations, and your data will be transferred from one org to another in a matter of minutes.

If somebody offers you an amazing opportunity but you are not sure you can do it, say yes – then learn how to do it later!

#BeASalesforceChamp

Automatic export tool for Salesforce Data Export backups

TL;DR
Jump to GitHub for the complete repository: https://github.com/enreeco/sf-automatic-data-export-script/

Have you ever had a close relationship with the Salesforce Data Export feature?

It’s a way to periodically export your entire Salesforce data set as zipped CSV files, including files and attachments.

You can do a one-shot export or schedule it monthly (available on Developer Edition orgs) or weekly (available on Enterprise, Performance, and Unlimited Editions only).

The one-shot and periodic export configuration is straightforward:

  • Select the file encoding
  • Select which data you want to export (including files and content can increase export size)
  • Select a schedule (for monthly or weekly export schedules only)
  • Select all or a subset of the available Salesforce objects
Monthly Data Export configuration schedule

What’s the outcome?

You’ll end up with a set of zipped files, each up to 512 MB in size, containing the exported Salesforce files (if selected in the configuration) or CSVs grouped by Salesforce object, as shown below:

The struggle of downloading

What if you have plenty of files and want to download them all automatically, in one shot, without having to click link by link?

Unfortunately, there are no standard Salesforce APIs you can use to automate the export, so the only way is to go by script: grab all the download links and trigger each download to a local folder (or remote storage, if you are brave enough).

I thought there was already a solution out there but, as far as I know, there wasn’t anything.

The script

I decided to implement a script in NodeJS that:

  1. logs in to Salesforce with a fully-privileged user
  2. opens the Data Export page
  3. looks for the download links (if any)
  4. triggers downloads one by one, putting them on a local folder

This way you can continue doing other tasks while the script runs.

DISCLAIMER: the script has been written in a quick & dirty style, so please don’t tell me it’s ugly, it gets you to the point!

Download it from GitHub: https://github.com/enreeco/sf-automatic-data-export-script

These are the simple steps:

  1. Install NodeJS and NPM if you haven’t already (you just have to download the installers; follow this guide, but you’ll find tons online)
  2. Open a console and install Foreman with:
    npm install -g foreman
    An alternative is to use the Heroku command line with:
    npm install -g heroku
  3. Install all required packages with the command npm install
  4. Rename .env-local to .env and replace the environment variables with a local path (where the files will be stored), the login URL, your username and the password+token (see the example after this list)
  5. Run your script with either:
    nf start
    or
    heroku local
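
As a minimal illustration of step 4, a filled-in .env could look like the sketch below. The variable names here are purely hypothetical (check the repository’s .env-local file for the actual keys the script expects):

# hypothetical keys: the real names are listed in the repo's .env-local
DOWNLOAD_FOLDER=/Users/me/sf-exports
LOGIN_URL=https://login.salesforce.com
SF_USERNAME=admin@example.com
SF_PASSWORD=MyPasswordPlusSecurityToken

The password value is your password concatenated with your security token, as described above.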

You’ll see the script running, and the files will magically drop into the selected folder:

Automatic Data Export script execution

Have a nice Salesforce day!

Key Findings from Mason Frank’s Salesforce Salary Survey 2020/21

2020 has been a year of change. The pandemic has had a devastating effect on many, and its side-effects have re-shaped the way we live, communicate, learn and ultimately, the way we work. The Salesforce ecosystem hasn’t been an exception. It’s hard to imagine what the future will look like, but it’s worth having a look at the trends that have shaped the Salesforce universe during these past months if we want to be as prepared as possible. This is why it’s a good time to have a look at Mason Frank’s Salary Survey – the largest independent Salesforce market report worldwide. Mason Frank International is a global leader in Salesforce Recruitment, and their yearly study gives us independent insights into the latest market trends and salaries across the ecosystem. The report delves into topics such as how professionals feel about their jobs and employers, work perks, certifications and diversity, and also looks at salaries in different roles globally. Here are some key findings from the report.

Experience vs education

Let’s start off with something of an eternal dilemma – when it comes to employability, which is more valuable, experience or education? If you’re looking to increase your earning potential as a Salesforce professional, experience seems to be deemed essential, with 90% of survey respondents naming it as the most important factor. That, together with exposure to large projects and Salesforce certifications, seem to be the top-ranked aspects that increase your earning potential. 

In contrast, having a university degree is considered important by just half of the survey’s participants. Formal education can lay the groundwork for a range of skills—communication and problem-solving just to name a couple—but with Salesforce being such a broad, evolving industry, experience and product knowledge seem to be better indicators of whether or not a candidate is suited to a particular post. 

Which Salesforce certifications will increase your pay? 

We’ve mentioned certifications being an important factor for career progression, but the real question is: which certifications are most likely to help with development and earning potential? The Technical Architect certification tops the Mason Frank Salary Survey list, with Salesforce professionals considering it to be the certification most likely to boost your pay for the second year in a row. 

This qualification is still very much a rare one within the ecosystem, making it highly sought-after by employers across the globe. This certification shows the depth and breadth of a candidate’s Salesforce knowledge and demonstrates the ability to deliver optimized solutions across the entire platform. The qualification is intense, and requires some serious commitment and investment, but as with any challenge, it’ll yield rewards if you put the work in. 

Let’s talk perks

We usually think of salary as one of the most significant factors affecting a candidate’s decision at that all-important offer stage. However, employers and job seekers alike should not underestimate the value of employee benefits. 

Many of the benefits enjoyed by Salesforce professionals, according to Mason Frank, are either the ones supporting employees outside of the workplace, such as health and medical insurance, and retirement savings plans, or perks aimed at improving that coveted work-life balance, such as homeworking or flexible working. Other perks topping the lists are training and development opportunities, and naturally, bonuses. The value associated with each of these perks depends on many factors—but making sure your employer offers a robust benefits package as well as a competitive salary will truly pay off.

Working from home 

What was previously considered more of a perk has become more or less the default following the coronavirus pandemic. Pre-pandemic, 21% of permanent professionals who took part in the Mason Frank Salary Survey worked from home on a full-time basis, while 62% worked from home at least once a week. Both figures increased during the pandemic, with 84% working remotely full-time, and 97% working from home at least one day a week.

Remote working definitely comes with its own set of pros and cons, and anyone currently experiencing it may have their own thoughts and concerns. However, what the remote working boom has surely done is open up roles to new, more diverse hiring pools, which is good news for anyone looking for a job and great news for employers looking to hire Salesforce talent in such a competitive market. 

Salesforce Salaries

We’ve spoken about how to maximize your earning potential, but how much are Salesforce professionals actually earning? Compensation benchmarking is beneficial to job seekers as it helps them gauge whether or not their salary is on par with their qualifications, skills, and experience, allowing them to make an informed decision when looking for fresh opportunities. 

It’s also interesting to look at salary benchmarking when considering re-location. Evaluating job proposals abroad can be quite tricky when you’re not sure if the salary on offer matches up to the standard of living, or whether it really is competitive in that country. For instance, a junior functional permanent consultant’s salary starts at an average of €23,000 in Italy, while that same role starts off at €48,000 in Germany, €35,000 in France and €47,000 in Ireland. It’s also worth looking at salary benchmarking if you feel like you haven’t seen a salary increase over some years, or if you’re not sure that increase matches up with your years of experience, qualifications, and ultimately, the current standard of living. For instance, the same junior functional permanent consultant salary started at €20,000 last year – an increase of €3,000 in the Italian market over just one year.

The Mason Frank Salary Survey 2020/21 is an excellent resource to learn all about the salary and benefits Salesforce professionals expect and receive today. It’s also packed with useful tips on how to maximize your earning potential as a Trailblazer, bringing you that one step closer to your dream job. Download the full report and get the most current snapshot of the Salesforce Ecosystem. 

What writing a (Salesforce) tech book means: my experience

Almost exactly a year and a half ago I was contacted by Alok Dhuri from Packt Publishing, asking if I was interested in writing a Salesforce guide.

At that time I was still a Salesforce MVP and, on my career checklist, the authoring experience was still missing.

Ever since I was a child, writing a real book has been one of my dreams: the only problem is that I’ve never been an artist, so writing a novel has never been an option (although I really, REALLY wish it were).

At the age of 27, after my MSc degree, I tried to write a PHP book for newbies: as a self-taught programmer (I took an Electronic Engineering MSc but learned programming all by myself), I really love helping others gain knowledge with less effort.

That book never saw the light of day, although I still have the draft in my archives (I lost the digital copy but still have a printed copy).

In 2009 I joined WebResults as a junior Salesforce developer, and in 2013 I started the Nerd @ Work blog with a cool technical post about a Salesforce workaround that had, and still has, much appreciation in the community.

That was when I understood that I had enough knowledge to share with the world: it was an important step in my career, because I finally understood that, although I’ve always been a humble guy, I could give back and help people just by telling them what my experience had taught me. Post by post, challenge by challenge, Nerd @ Work became a well-known blog within the Salesforce Ohana community.

Busy with my daily work, side projects, ORGanizer for Salesforce and, recently, authoring 2 books, I started getting help from the Ohana with awesome guest blog posts, but I try to write as much as I can.

The first book: let’s start with advanced stuff first

Although I really wanted to write something for newbies, the guys from Packt Pub. suggested I write a guide for the Salesforce Advanced Administrator certification, which I took as an amazing opportunity… after all, I had never written a book before. Challenge accepted!

After almost 6 months, the book was out in the book shops, and I had an amazing blast when I saw it at the Dreamforce 2019 book shop (picture below).

Salesforce Advanced Administrator Certification Guide by Packt Pub. at Dreamforce 2019

Next book please!

Writing Salesforce Advanced Administrator Certification Guide was a blast, but it was an advanced book and I knew it couldn’t become a best seller.

Unfortunately, a few months after publication, in March 2020, I lost my Salesforce MVP status, which honestly made me feel down about my Salesforce Ohana involvement: I didn’t understand why, even after publishing a book, hosting my blog, and running a well-known browser extension used by thousands of people, the status was not renewed. But, after the first days of sadness, I decided it was just a new challenge for me.

Fortunately, in that same March 2020, Alok came back with the title I was looking for: Hands-On Low-Code Application Development with Salesforce.

Finally, a book for newbies, where I could introduce people to our beloved technology, speeding up their involvement with Salesforce and trying to help companies facing a heavy shortage of Salesforce professionals.

The pandemic was striking across the world, and a psychologically heavy lockdown hit Italy between March and mid-May 2020. We lost a dear friend, Steven, which is why I decided to dedicate this new book to him and all the other COVID-19 victims.

I didn’t have as much free time as I thought home working would bring, so keeping up with the chapter schedule was hard during those months: an average of 2-3 chapters per month should have brought the book to life in November 2020 and, luckily, we managed to finish at the beginning of October, one month early… not bad!

Hands-On Low-Code Application Development with Salesforce by Packt Pub.

But how does writing a technical book work?

The schedule

The first step when writing a book is the Table of Contents (TOC) creation: what will we be talking about?

I usually use a personal knowledge tool (such as Atlassian Confluence) to host these files, so I can quickly access and update them whenever I need, from any device.

The TOC is not definitive and it is possible to change chapter order or even chapter descriptions; indeed this is the final approved TOC:

  1. A Brief Introduction to Salesforce
  2. Building the Data Model
  3. Mastering Formulas
  4. Cleaning Data with Validation Rules
  5. Handling Dynamic Configuration
  6. Security First – The “Who Sees What” Paradigm
  7. Be a Workflow Champion
  8. Setting Up Approval Processes
  9. Process Builder – Workflow Evolution
  10. Designing Lightning Flows
  11. Interacting with Actions
  12. All about Layouts
  13. The Lightning App Builder
  14. Leveraging Customers and Partners Power with Communities
  15. Importing and Exporting Data Declaratively
  16. Learning about Data Reporting
  17. The Sandbox Model
  18. Deploying Your Solution
  19. Salesforce Ohana – The Most Amazing Community around

For each chapter you need to provide:

  • expected page count
  • chapter extract
  • learning objectives

To keep up with the schedule I literally printed out a calendar for the following months, so I always had the whole schedule in sight, as shown below.

Each chapter has a first draft release date, after which the guys at Packt Pub. review all the content in terms of English grammar, chapter structure and all the non-technical stuff: I REALLY want to thank Prajakta Naik and Tiksha Abhimanyu Lad for surviving my awful English writing!

After one or two review iterations, each book is then evaluated by a technical reviewer: I was supported the whole time by my Ohana friend Fabrice Cathala, who happily joined the team and, with his vast Salesforce knowledge as a prominent Salesforce technical architect and evangelist, helped me tweak and increase the coherence of the narration across the chapters’ content.

If you plan to write a book, be aware that you may find yourself stuck with a new draft to write, an editor review to check and a tech review to finalise: and this is not your only job!

Time management is essential, you made a commitment and, if you are like me, you REALLY want to keep your word and finish what you started!

Pay attention to…

  • Check your page count: I have a tendency to write too much
  • Balance content depth against page count: depending on the audience you are talking to, try not to write too much and simplify the explanation
  • Follow a coherent narrative style: it is your book, choose your style and don’t be afraid to adopt an informal tone… I love to add some humour (even if a tech book is not the perfect place to tell a joke!)
  • Use external references: there’s plenty of stuff on the net; avoid copy-pasting tables or lists, and simply add a reference/highlight box with a link to the external resource where the reader can find further details
  • Take good screenshots: save them at a good resolution and avoid typos (I’m known for writing tons of typos…). I suggest saving pictures in a dedicated folder (one per chapter) so, if you ever need to make some modifications, you have the original version
  • Take a note of each step in your examples: if your book has examples, take note of every configuration/customisation; you may need to execute the same steps again in the future (for example, to take another screenshot) and, believe me, a few months after writing you may forget what you were doing
  • Don’t forget the final goal: during the writing you may find weeks where you believe you want to give up, you may be stressed, but remember that this is pretty normal, it is the so-called writer’s block, and if you are not an experienced author, well… sooner or later you’ll feel this awful feeling

Finally the publication

But at the end of your journey the book finally gets published: this is an amazing feeling, and now you have to wait patiently for reviews coming from all around the world, hoping that the effort you put into writing those hundreds of pages was worth at least a bit, and maybe helped someone gain some knowledge.

I really love the feeling of taking a copy of my own book, turning the pages, randomly reading a sentence and checking whether I’ve been clear enough.

My free copies of the book, a cool gift from the publisher

Writing a book is an interesting and formative journey: if you believe you have something to tell the world, start a new authoring project, think of a cool title, plan your content and start writing. Believe me when I tell you this is not a waste of time!

If you want to start a Salesforce career, give my book a try and let me know if you enjoyed it!

Comparison between Salesforce Omni-Channel and Round Robin Distributor for Salesforce

Nandini is currently working at Mirketa as a Product Manager, aiming to provide strategic and innovative solutions. She specializes in Salesforce Sales Cloud and administration, providing support in product marketing, client interfacing and aspects of project management. She also enjoys writing blogs on Salesforce sales management and shares her experience in product management on Medium. Connect with her on LinkedIn for related conversations and insights!


Omni-Channel is a declarative tool that routes work based on queues to help assign work items to available agents in real-time. The assignment uses routing models based on the most available and qualified support agents and sales reps in your Salesforce console.

Why Did Salesforce Introduce Omni-Channel?

  • Omni-Channel routing helps business admins push workload (work items in Salesforce) to agents in real time, thus optimizing service response time.
  • With a high volume of incoming tickets (support requests), agents need to work first on what is of high priority to the business, and the Omni-Channel solution allows Admins and Supervisors to set routing priorities for work items by enabling Secondary Routing.
  • It shortens the average resolution time for online customers through Live Chat by listing it as a priority, preventing delays in responses to live customers on your website.
  • It lets you track ticket volume and the reasons tickets are declined by setting up Omni-Channel Supervisor in your Salesforce org, watching handle times tick by the second and average wait times change as agents accept and close their work.
  • Weighted allocation and capacity management: assign a higher load to more experienced agents and a lower load to new joiners, avoiding the burnout that queuing everything up for only the very best agents could cause.
  • Skill-based routing assigns work to agents based on the attributes associated with the work item, on a real-time basis.

How can you set up Omni-Channel for your Salesforce org?

If you are just starting to set up the Omni-Channel experience, here is the list of tasks you will be required to perform:

Step 1: Type Omni in the Quick Find box of the Setup menu. In the options, click Omni-Channel Settings, check ‘Enable Omni-Channel’ and Save

Step 2: Click on Service Channels under Omni-Channel Setup, and click on New

Here, Salesforce Admins can define the Salesforce object they want to associate the routing with, as well as the secondary routing configuration priority

Step 3: Create Presence Configurations and set Presence Statuses

Set up Presence Statuses that are easy to use and understandable by support agents

In this step, the Admin can define the intake capacity for an agent, as well as the statuses to apply when an agent declines a work item or times out. The Admin can also define the users and profiles the configuration applies to

Step 4: Specify the manner and order in which Salesforce will automate the routing of records for your business. The model determines which agent will be selected from the list for routing.

Step 5: Create Queue and Define Configuration with Omni-Channel

Step 6: For the final step, add Omni-Channel to the utility bar of the Lightning app

Go to Setup and type in App and select App Manager. Choose the app in which you would want to enable Omni-Channel and click on Edit on the drop-down towards the right.

In the general settings, I will select Service Console, as I am setting it up for my Customer Support Team. When the new window opens, select Utility Items from the vertical navigation pane and add Omni-Channel from the list of items.

Now you are all set to get started and, as a Manager, you are ready to connect the right information to the right people at the right time.

When not to use Salesforce Omni-Channel

This Salesforce out-of-the-box functionality does come with some limitations, which make it complex to use with other integrations and for organizations looking to use advanced routing solutions for their teams:

  • In a help article by Salesforce, it is mentioned that when Omni-Channel agents are online and see the error “LIMIT_EXCEEDED, limit exceeded”, it is possible that the queue has hit the maximum limit of 200,000 records. New work items will not be added to the queue or routed to agents until the volume has lowered.
  • According to this Salesforce article, if a work item requires certain skills, but no agents have those skills, then the work item isn’t routed.
  • Omni-Channel is not capable of assigning records to a specific agent or rep based on past relationships with clients, using the attributes of incoming Salesforce leads, cases, accounts, opportunities or any custom SFDC object
  • A record cannot be re-assigned to another queue if the case has not been accepted within a specific wait time
  • If the capacity assigned to an agent is exhausted, high-priority cases or hot leads cannot be assigned to the best available rep, which impacts business throughput

Recommended Salesforce App- Round Robin Distributor for Salesforce

The Round Robin Distributor (RRD) app overcomes the limitations of Salesforce Omni-Channel and optimizes and simplifies complex routing logic for your Sales & Service teams

The RRD tool is a robust, highly customizable, open-platform solution for your distribution automation needs, and it integrates with marketing and sales automation apps like Marketo and HubSpot, and even with existing Salesforce Lead Assignment rules and Salesforce Case Assignment rules.

Round Robin Distributor addresses the following concerns for you:

  • Role-based assignment: assign leads to multiple people working in various positions automatically by defining distribution rules. For instance, in the education industry, you might want to assign a Program Manager, Financial Advisor and Academic Advisor, along with a Case Owner/Contact Owner, to a student, which can be round robin’ed with Round Robin Distributor by simply defining the criteria for allocation.
  • Re-assign cases to another queue: if everyone in the selected queue has declined the case, capacity is exhausted or no agents are available, the cases are routed to the queue with the second-highest priority matching the defined rules logic
  • Priority-based handling of users and queues: through Round Robin Distributor, administrators can define priorities for case teams, users and even queues, based on which the inbound record is routed to the best available agent
  • Distribution of leads and cases to queues: leads and cases can be round robin’ed between multiple queues based on the defined attributes of the incoming leads and cases, whether they arrive through an integration or a manual upload of inbound data.
  • Criteria-based routing to queues: complex AND/OR logic can be used between various attributes, and the corresponding values can be defined in RRD Teams for efficient allocation in organizations with complex team structures.
  • Ability to handle high-priority cases even if capacity is exhausted: customer experience is of high value to organizations, and to cater to high-value cases, agents should be able to respond to them on priority even if their limit has been exhausted.
  • Round Robin routing method
  • Eliminates the issue of triggers and workflow rules not firing on updates made by Omni-Channel
  • Handling relationship-based assignments: useful for routing leads and cases when you want to assign them to the same person working on another case/lead from the same client.

To know more about the benefits of the app, the design of the distribution logic architecture and intuitive real-time routing, visit our page at https://www.roundrobindistributor.com/

We are introducing an exclusive referral program this fall for fellow partner firms and solution architects, with a small token of appreciation for every successful referral. To know more, email us at [email protected]

Who needs so many records?

Today’s post has been written by Nikos Mitrakis, the creator of Forceea, an amazing Data Factory Framework for Salesforce.
Some facts about Nikos:
– Salesforce Developer at Johnson & Johnson EMEA Development Centre (EDC)
– Started his Salesforce journey in 2014
– Has passed 13 certifications, including Application & System Architect
– Holds a Physics degree
– Married since 1994, has a daughter
– Loves watching sci-fi movies and good comedies
– Lives in Limerick, Ireland


A first question you probably have when you read about creating millions of records is: “Who really needs to create millions of records?” Sometimes it’s not “millions”; it’s anything between a few thousand and hundreds of thousands of records. But the need is the same: a flexible tool that can insert (and delete, of course) many SObject records and will allow:

  • Companies of any size to create sandboxes for User Acceptance Testing (UAT).
  • AppExchange ISV/Consulting partners to create orgs with sample data for demos or for a realistic simulation of their app.
  • Testers or business users to generate their testing data in a sandbox.
  • Architects to create Large Data Volumes (LDV) for stress testing of their designs.

Forceea overview

Forceea data factory (a GitHub project) can create data using the Dadela data generation language. The framework can insert/update records synchronously for test methods (or for inserting a few hundred records) in your org, but it can also insert/delete records asynchronously.

Forceea has a rich set of powerful data generation tools and it’s the most sophisticated data factory for Salesforce. The latest release adds variables, permutations of serial values and the first function-x definition.

I can hear you asking: “How complex (or ‘difficult’) is it to create records with Forceea asynchronously? Do I need to know how to write code?”

The answer is: “Yes, you should write a few lines of Apex code. But, NO, it’s not difficult at all!” Sometimes the data creation is complex because we must have a deep knowledge of how our SObjects are related to each other, but this doesn’t require advanced programming skills.

So, what is needed to start working with it?

  • A Template.
  • An anonymous window to execute Apex scripts.
  • A Lightning component to monitor the progress.

Let’s start with…

The Template

In my previous article How to create an Apex reusable Data Factory Library using Forceea Templates, we had constructed some Templates using an older version of Forceea. The good news is that Forceea now inherently supports Templates, so the Template creation process is simpler.

What is a Template

A Template will not create data; it’s a “description” of the structure of the data we want to create.

When we construct a Template we define:

  • The SObjects that will be created.
  • The number of records of each SObject.
  • What fields will be populated.
  • The structure of field values.

A Template is a Map<String, FObject>, so our Template will start with the initialization of this Map:

Map<String, FObject> template = new Map<String, FObject>();

Defining what data we need

Before starting our Template we should have a good understanding of the SObjects and fields we need, the relationships between the SObjects, and the data we want for each field.

Here are our (hypothetical) requirements:

Accounts

  • Record type: the record type with name MajorAccount.
  • Name: Account-1, Account-2, etc.
  • Industry: any picklist value except Banking and Services.
  • AnnualRevenue: a random integer number between 1M and 10M.
  • Rating: any picklist value.
  • Type: any random value between Prospect, Customer and Analyst.
  • Shipping address: any (real) address from U.S.

Opportunities

  • Record type: the record type with name BigOpp.
  • Name: <Account> – <text>, where <Account> is the name of the related account and <text> is a text of random words between 20 and 40 chars.
  • Amount: a random number between 10K and 1M, rounded to nearest 100.
  • StageName: any picklist value except Closed Won and Closed Lost.
  • Type: New Business.
  • CloseDate: any date between 1 Jan. 2020 and 30 June 2020.
  • AccountId: the 1st account to the 1st opportunity, the 2nd account to the 2nd opportunity and so on. If we have no more accounts, start from the 1st account, then to the 2nd, etc.

For every 1 account we’re going to create 10 opportunities.

The template for accounts

First, we “add” the Account definitions in our template:

template.put('Accounts', new FObject(Account.SObjectType)
  .setNumberOfRecords(10)
  .setDefinition(Account.Name, 'static value(Account-)')
  .setDefinition(Account.Name, 'serial type(number) from(1) step(1) scale(0)')
  .setDefinition(Account.Industry, 'random type(picklist) except(Banking,Services)')
  .setDefinition(Account.AnnualRevenue, 'random type(number) from(1000000) to(10000000) scale(0)')
  .setDefinition(Account.Rating, 'random type(picklist)')
  .setDefinition(Account.Type, 'random type(list) value(Prospect,Customer,Analyst)')
  .setDefinition(Account.ShippingStreet, 'random type(street) group(shipping)')
  .setDefinition(Account.ShippingPostalCode, 'random type(postalCode) group(shipping)')
  .setDefinition(Account.ShippingCity, 'random type(city) group(shipping)')
  .setDefinition(Account.ShippingState, 'random type(state) group(shipping)')
  .setDefinition(Account.ShippingCountry, 'random type(country) group(shipping)')
);
  • The order of the field definitions is important! Forceea generates the values for the first field definition, then for the second, etc.
  • The Name field has 2 definitions. The first generates the same (static) value “Account-” and the second serial numbers (1,2,3,..)
  • We “grouped” all address definitions in order to “link” the correct street to the correct city, postal code, etc.
  • If we had a Billing address, we could copy the value from the Shipping, e.g. setDefinition(Account.BillingCity, 'copy field(ShippingCity)')

The Template for opportunities

Now we are going to set the Opportunity definitions:

template.put('Opportunities', new FObject(Opportunity.SObjectType)
  .setNumberOfRecords(100)
  .setDefinition(Opportunity.AccountId, 'serial lookup(Account) mode(cyclical) source(forceea)')
  .setDefinition(Opportunity.Name, 'copy field(AccountId) from(Account.Name)')
  .setDefinition(Opportunity.Name, 'static value(" - ")')
  .setDefinition(Opportunity.Name, 'random type(text) minLength(20) maxLength(40)')
  .setDefinition(Opportunity.Amount, 'random type(number) from(10000) to(1000000) scale(2)')
  .setDefinition(Opportunity.StageName, 'random type(picklist) except(Closed Won,Closed Lost)')
  .setDefinition(Opportunity.Type, 'static value(New Business)')
  .setDefinition(Opportunity.CloseDate, 'random type(date) from(2020-01-01) to(2020-6-30)')
);

The FObjectAsync class

Now we can proceed with the actual insertion of records. Our main tool is the FObjectAsync class.

How the async process works

When we insert or delete records asynchronously, Forceea uses Queueable Apex to execute one or more jobs. These jobs have some higher governor limits (e.g. 60,000ms total CPU time and 200 SOQL queries), which is definitely positive for our data generation needs.

If you think “I’m going to create x accounts and y opportunities”, forget that way of thinking: Forceea works with iterations! An iteration is the number of records (for each SObject) defined in the Template we use. Our template creates 10 accounts and 100 opportunities, so 1 iteration will create 10 accounts and 100 opportunities.

Another important detail is Partitioning, which has two parts:

  • Template: you define the Partition field for each SObject with the method setPartitionFieldName.
  • FObjectAsync: you define the Partition field value for all SObjects with the method setPartitionFieldValue.

The Partition field value should be a string which will identify (or “partition”) the inserted records. As a best practice, use a value with a few characters, even a single letter (uppercase or lowercase).

When inserting records, Forceea checks:

  • If there is a Partition field defined in each SObject.
  • If there is a Partition field value.

If both conditions are valid, Forceea will insert the value in the partition field of each record. So, let’s say that the Partition field for Account is ForceeaPartition__c and the Partition field value is df. In this case, Forceea will insert the value:
• df1 into the records inserted in Job 1.
• df2 into the records inserted in Job 2.
• df3 into the records inserted in Job 3.
etc.
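
As a minimal sketch of how the two parts fit together, and assuming the custom field ForceeaPartition__c from the example above exists on Account (the exact setPartitionFieldName signature may differ, so check the Forceea Success Guide), the account Template could be extended like this:

template.put('Accounts', new FObject(Account.SObjectType)
  .setNumberOfRecords(10)
  .setPartitionFieldName('ForceeaPartition__c') // assumption: the partition field is passed by API name
  .setDefinition(Account.Name, 'static value(Account-)')
  // ... all remaining field definitions stay exactly as in the Template above
);

The Partition field value itself (“df” in what follows) is not set here; it is supplied later on FObjectAsync, as shown in the next section.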

Insert records asynchronously

Now we are going to insert 1,000 iterations, so we’ll insert 1,000 x 10 = 10K accounts and 1,000 x 100 = 100K opportunities.

Open an Anonymous Apex window and enter the following lines:

new FObjectAsync(template)
    .setNumberOfIterations(1000)
    .setNumberOfJobs(20)
    .setPartitionFieldValue('df')
    .insertRecords();
  • The default number of (parallel asynchronous) jobs is 30. Here we require 20 jobs.
  • The partition value is “df”.

Execute the code and then go to the Data Factory tab of the Forceea Lightning app.

  • In the Log panel Forceea displays information about the execution of each job.
  • The Messages panel contains an overview of the async process.
  • The Progress panel will let you know how many iterations have been inserted.
  • Finally, the Job Status panel displays a visual indication of the status for each job (black: pending, green: successful, red: failure, orange: terminated).

Forceea will follow this procedure during the async insertion process:

  • Benchmarks the operation by inserting 1 iteration in the first batch. The transaction is rolled back, so it doesn’t permanently insert any records.
  • Executes the second batch of each job, which creates and inserts records of each SObject defined in the Template, with as many iterations as possible (remember the benchmarking).
  • If there are no errors and there are more iterations to be inserted, a third batch is created, and so on.
  • When all iterations assigned to a job have been inserted, the job ends with a successful completion.

When we have a serial definition, Forceea will insert the records without any gaps in the serialization!

Delete records asynchronously

The deletion process follows almost the same logic:

new FObjectAsync(template)
    .setNumberOfJobs(20)
    .setPartitionFieldValue('df')
    .deleteRecords();

Execute the above Apex code and then go to the Data Factory tab to watch the progress.

Forceea will follow these steps during the async deletion process:

  • Reverses the order of SObjects in the Template, so the last SObject will get the first position, etc.
  • If all SObjects in the Template have a Partition field and FObjectAsync has a Partition field value, a number of jobs are enqueued for parallel processing (each job will delete all records of different partitions), otherwise it enqueues only 1 job (no partitioning).
  • The deletion starts from the SObject in the first position, executing the first batch of each job, which benchmarks the transaction to calculate the maximum number of records that can be deleted in every batch. This first benchmarking batch deletes up to 200 records.
  • If there are no errors and there are more records to be deleted, a second batch is created after the completion of the first batch, and so on.
  • When all SObject records assigned to a job have been deleted, the job moves to the second SObject, etc.

Important: if Forceea finds in the Template a definition for the RecordTypeId field of an SObject, it will delete the records of this Record Type only.

Forceea will stop the execution of a job when an error is encountered, except for errors related to record locking, where it will raise an error only after the 5th occurrence of the UNABLE_TO_LOCK_ROW error.

Using existing lookup records

Forceea will take care of all the complex orchestration of the asynchronous process. The parallel processing offers an advantage, but it’s based on the assumption that we won’t query any existing records from the database, otherwise we may have record locking.

For example, if we have a custom SObject Language__c and we have the lookup field Language__c on Opportunity, to get random IDs for this field we would use:

setDefinition(Opportunity.Language__c, 'random lookup(Language__c) source(salesforce)')

If the above definition raises the UNABLE_TO_LOCK_ROW error (unable to obtain exclusive access to this record), then your only option is to use 1 job only with setNumberOfJobs(1).

Conclusion

Nobody can say that data generation is simple or without issues. Under the hood, the data generation process is quite complex, but it shouldn’t be for the user; Forceea will gracefully handle all the complexity.

I strongly believe that an admin, a tester or even a business user, with no Apex knowledge, can insert/delete records asynchronously using FObjectAsync and existing Templates, which a developer or advanced admin could create.

You can find the code of the above scripts in the Forceea-training GitHub repo. And don’t forget to read the Forceea Success Guide; it has a lot of examples and details.

Get Started with Salesforce Data Cleansing

Il’ya Dudkin is the content manager and Salesforce enthusiast at datagroomr.com. He has more than 3 years of experience writing about Salesforce adoption, duplicate detection issues and system integrations with MuleSoft. He also works with IT outsourcing companies to facilitate the adoption of new Salesforce apps and increase user acquisition and loyalty.


Simply getting started with cleaning up the data in Salesforce may be a daunting challenge, especially for companies that have hundreds of thousands of records, or even millions. It is important to know that even if duplicates are severely hindering your marketing and sales efforts, you can bring all of the issues you are having under control and improve the overall quality of the data. If you are like most organizations and feel like the data you currently have is preventing you from capitalizing on business opportunities, we have some steps that you can take today to start the process of data cleansing.

Know Where Salesforce Falls Short

While your investment in Salesforce may be hefty, the deduplication functionality in the off-the-shelf product is fairly limited. For example, there is no way to conduct a cross-object duplicate search. This means that your new lead may already be in your contacts and vice-versa. Also, a lot of companies have custom objects beyond the standard Leads, Contacts, and Accounts, and Salesforce by itself will not be able to check those for you. If you are working with large volumes of data, i.e. hundreds of thousands or even millions of records, the duplicate jobs performed by Salesforce will not be enough. In fact, Salesforce itself admits this issue in the Trailblazer Community.

Keep in mind that these are only some of the shortfalls of Salesforce’s built-in deduplication features. You can find more details about why the off-the-shelf product alone is not enough to catch all of the duplicates in this article. However, now that you are aware of the limitations of Salesforce in the deduping area, you will be in a better position to choose a third-party product that meets all of your needs. 

Choosing a Deduping Tool 

If you search the AppExchange for a deduping app, you will be inundated with lots of products that all have their individual merits. However, each company has its own individual needs, which narrows down the search results to just a handful of possibilities. There are a few things you need to consider when comparing products. First of all, look for something that’s easy to set up. One of the reasons that the built-in deduplication features inside Salesforce are not very effective is that they are rule-based. This means that your Salesforce admins will have to create a rule for each type of duplicate, which can prove to be impossible if we think about the various shapes and forms of fuzzy duplicates.

A much better approach would be to choose a tool that uses machine learning to catch the duplicates. This offers you several benefits. First of all, you eliminate all of the issues and hassles of setting up rules, since the algorithm will learn to identify future duplicates without being explicitly programmed to do so. You are also simplifying the setup process, since the product will be ready to use right away. The machine learning algorithms do the heavy lifting, and all you have to do is append the field values for the master record. A lot of products also allow you to automate the duplicate checking process, which is always helpful given that new duplicates appear all the time.

Thoroughly Plan Out the Process

One of the biggest mistakes a lot of companies make is that they start thinking about the endgame right away instead of focusing on how data enters Salesforce. For example, if your users are manually entering data into Salesforce or making edits, it can be very easy to make a simple typing mistake that causes all kinds of confusion. Automated data imports are not foolproof either, since a lot of the time the data is incomplete, and if any of the fields required by the object are missing the import will fail. Therefore you need to account for all of the duplicate data entry points and plan out how you will address all of these issues.

In addition to planning out the technical aspects of implementing the deduplication tool, you will also need to take into consideration the human factor, i.e. any issues the end-users will have while getting accustomed to the new product. This will also require some planning, since you don’t want to make a sudden change that interrupts the workflow of your employees. Also, be sure to provide user training, since it will take your employees some time to get adjusted, especially if there is a complex setup process involved.

Set Attainable Goals

Recent data shows that somewhere between 10%-30% of the data inside a company’s CRM is duplicate data. The key metrics you should be monitoring are accuracy, consistency, and completeness. The accuracy of the data is best measured through business interaction since this provides you with real-time insights. If this is not possible, then you should use independent confirmation techniques. Pay close attention to the ratio of data to accuracy which will identify known errors. This includes missing or incomplete information that could potentially be located in a duplicate record. If all of the processes you are implementing are proving to be effective, then the ratio should increase over time. 

Consistency refers to conflicting data. Duplicate records usually contain several versions of the truth, and you have to append the entries to identify the master record and then merge the duplicates. If you have conflicting data, you cannot get a complete view of your customer, and you could be aligning your strategies incorrectly. This is where the completeness of the data comes in: think of all the data scattered among duplicate records as pieces of a large puzzle that, put together, give you invaluable insights about the customer. Combing through all of the records manually, or even with a rule-based application, is very time-consuming if not impossible, since you cannot create a rule to fit every scenario. 
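
As a rough illustration of how completeness and consistency can be tracked, here is a small Python sketch; the tracked fields and sample records are hypothetical.

# Rough data-quality metrics over duplicate records that refer to the same customer.
# Field names and records are hypothetical examples.
TRACKED_FIELDS = ["Email", "Phone", "BillingCity", "Industry"]

def completeness(record):
    # Share of tracked fields that are actually populated.
    filled = sum(1 for f in TRACKED_FIELDS if record.get(f))
    return filled / len(TRACKED_FIELDS)

def conflicts(duplicates):
    # Fields where the duplicate records disagree (several "versions of the truth").
    out = {}
    for f in TRACKED_FIELDS:
        values = {r[f] for r in duplicates if r.get(f)}
        if len(values) > 1:
            out[f] = sorted(values)
    return out

dupes = [
    {"Email": "a@acme.com", "Phone": "555-0100", "BillingCity": "Milan", "Industry": None},
    {"Email": "a@acme.com", "Phone": "555-0199", "BillingCity": None, "Industry": "Retail"},
]
print([round(completeness(r), 2) for r in dupes])  # [0.75, 0.75]
print(conflicts(dupes))                            # {'Phone': ['555-0100', '555-0199']}

Tracking these two numbers over time gives you a simple, objective way to check whether your deduplication effort is actually improving the data.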

Constantly Collect Feedback

We mentioned the importance of monitoring key metrics in your deduplication efforts, but listening to the people who actually use the tool on a daily basis is just as important, if not more so. They can provide you with valuable insights that the data may not capture. For example, they may tell you that they don’t trust the tool to properly cleanse the data, or that they are still spending more time than they would like fixing duplicates manually. At the end of the day, remember that the reason you are installing this particular app is to assist the people on the ground communicating with customers. If they tell you it just isn’t working, that should be the most important factor in deciding to make a change. 

Don’t Postpone Deduping Your Salesforce 

While the duplicate issue may have snowballed into a big problem for many companies, they are often unwilling to start tackling it given the magnitude of the effort and the resources it will require. However, keep in mind that these duplicates are constantly draining your resources. As a general rule, remember the 1-10-100 ratio: it costs $1 to verify the quality of the data you have, $10 to eliminate each duplicate, and $100 for every duplicate that is left unchanged. If you have hundreds of thousands or millions of records, such costs really add up, which is why you should not delay deduping your Salesforce org. 
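
As a back-of-the-envelope illustration of the 1-10-100 ratio, assume a hypothetical org with 500,000 records of which 15% are duplicates; the figures below are purely illustrative.

# Back-of-the-envelope cost comparison using the 1-10-100 rule.
# Record counts and the duplicate rate are illustrative assumptions.
total_records = 500_000
duplicate_rate = 0.15
duplicates = int(total_records * duplicate_rate)       # 75,000 duplicates

verify_cost = total_records * 1                        # $1 per record to verify quality
cleanup_cost = verify_cost + duplicates * 10           # plus $10 per duplicate eliminated
do_nothing_cost = duplicates * 100                     # $100 per duplicate left unchanged

print(f"Verify and clean now:   ${cleanup_cost:,}")    # $1,250,000
print(f"Leave duplicates alone: ${do_nothing_cost:,}") # $7,500,000

Even with generous assumptions, cleaning up sooner costs a fraction of what unresolved duplicates end up costing.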

Scale-up your business with Salesforce Large Data Volume Orgs and Testing

This post is brought to you by Luca Miglioli, an Information System Analyst who works at WebResults (Engineering group) in the Solution Team, a highly innovative team devoted to evangelizing Salesforce products.


Introduction

In the modern day, people constantly create data. All day long, every day. So. Much. Data. Suddenly, your org has accumulated millions of records, thousands of users, and several gigabytes of data storage. That’s why designing for high performance is a critical part of the success of software and applications: you don’t want your largest and most important customers to find performance issues with your app before you do.

Fortunately, if you work in IT you also know that the technology in this industry is built for “scalability”, the ability to handle a growing amount of work by adding data or resources to the system. That’s why Salesforce provides Large Data Volume (LDV) orgs: they have 60 GB of data storage (roughly 31,000,000 records, since Salesforce counts most records as about 2 KB each) and more space allocated for File Storage and Big Objects. They’re worth using!

Use Cases

Large Data Volume testing is performance testing with a large amount of data (millions of records) in an org under a significant load (thousands of concurrent users). It can help you find answers to questions like: can my system scale to elegantly handle a large amount of data?

Regarding this topic, and according to the Salesforce Developer’s Guide, there are a lot of use cases your business might encounter:

In general, you’d need LDV for testing and performance checks, to see if your application or architecture could handle a very large amount of data.

Salesforce enables customers to easily scale up their applications from small to large amounts of data. This scaling usually happens automatically, but as data sets get larger, the time required for certain operations grows. How architects design and configure data structures and operations can increase or decrease those operation times by several orders of magnitude.

The main processes affected by differing architectures and configurations are:

  • Loading or updating of large numbers of records, either directly or with integrations.
  • Extraction of data through reports and queries, or through views.

Data Load

You can easily import external data into Salesforce. Supported data sources include any program that can save data in the comma-delimited text format (.csv). Whether we’re talking about an LDV migration or ongoing large data sync operations, minimizing the impact these actions have on business-critical operations is the best practice. A smart strategy for accomplishing this is loading lean, including only the data and configuration you need to meet your business-critical operations. Here are some suggestions (a minimal load sketch follows the list):

  • Parent records with master-detail children. You won’t be able to load child records if the parents don’t already exist.
  • Record owners. In most cases, your records will be owned by individual users, and the owners need to exist in the system before you can load the data.
  • Role hierarchy. You might think that loading would be faster if the owners of your records were not members of the role hierarchy. But in almost all cases, the performance would be the same, and it would be considerably faster if you were loading portal accounts. So there’s no benefit to deferring this aspect of the configuration.
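
If you script the load yourself rather than using one of the tools described below, a lean CSV insert through the Bulk API 2.0 might look roughly like this Python sketch; the access token, instance URL, and API version are placeholders you would supply yourself.

# Minimal Bulk API 2.0 insert sketch: create an ingest job, upload a lean CSV,
# and close the job. Token, instance URL, and API version are placeholders.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"   # placeholder
TOKEN = "00D...access_token"                          # placeholder
API = "v58.0"                                         # assumption
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1. Create the ingest job for the target object.
job = requests.post(
    f"{INSTANCE}/services/data/{API}/jobs/ingest",
    headers=HEADERS,
    json={"object": "Contact", "operation": "insert", "contentType": "CSV", "lineEnding": "LF"},
).json()

# 2. Upload only the fields you actually need (load lean).
csv_body = "FirstName,LastName,Email\nAda,Lovelace,ada@example.com\n"
requests.put(
    f"{INSTANCE}/services/data/{API}/jobs/ingest/{job['id']}/batches",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "text/csv"},
    data=csv_body,
)

# 3. Mark the upload as complete so Salesforce starts processing the batches.
requests.patch(
    f"{INSTANCE}/services/data/{API}/jobs/ingest/{job['id']}",
    headers=HEADERS,
    json={"state": "UploadComplete"},
)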

Data Extract

You’ve been tasked with extracting data from a Salesforce object. If you’re dealing with small volumes of data, this operation might be simple, involving only a few button clicks using some of the great tools available on the AppExchange. But when it comes to dealing with millions of records in a limited time frame, you might need to take extra steps to optimize the data throughput. Here are some hints:

  • Chunking Data. When extracting data with the Bulk API, queries are split into 100,000-record chunks by default; you can use the chunkSize header field to configure smaller chunks, or larger ones up to 250,000. Larger chunk sizes use up fewer Bulk API batches but may not perform as well (see the sketch after this list).
  • Idempotence. Remember that idempotence is an important design consideration in successful extraction processes. Make sure that your job is designed so that resubmitting failed requests fills in the missing records without creating duplicate records for partial extractions.
  • Caching. The more tests you run, the more likely that the data extraction will complete faster because of the underlying database cache utilization. While it is great to have better performance, don’t schedule your batch jobs based on the assumption that you will always see the best results.
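
To give an idea of where that chunkSize setting lives, here is a rough Python sketch of creating a Bulk API (v1) query job with PK chunking enabled; the session ID, instance URL, and API version are placeholders, and the remaining steps (adding the SOQL batch, polling, downloading results) follow the standard Bulk API flow.

# Rough sketch: enable PK chunking when creating a Bulk API (v1) query job.
# Session ID, instance URL, and API version are placeholders.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"   # placeholder
SESSION_ID = "00D...session_id"                       # placeholder
API = "58.0"                                          # assumption

job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>query</operation>
  <object>Account</object>
  <contentType>CSV</contentType>
</jobInfo>"""

resp = requests.post(
    f"{INSTANCE}/services/async/{API}/job",
    headers={
        "X-SFDC-Session": SESSION_ID,
        "Content-Type": "application/xml; charset=UTF-8",
        # Split the extraction into 250,000-record chunks instead of the 100,000 default.
        "Sforce-Enable-PKChunking": "chunkSize=250000",
    },
    data=job_xml,
)
print(resp.status_code)  # The job ID comes back in the XML response body.
# Next steps (not shown): add a batch containing the SOQL query, poll the batches,
# then download each chunk's result set.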

Tools

There are a lot of tools you can use for performing these upload and extract operations:

  • Data Import Wizard (up to 50,000 records): An in-browser wizard that imports your org’s accounts, contacts, leads, solutions, campaign members, and custom objects.
  • Data Loader (up to 5 million records): Data Loader is an application for the bulk import or export of data. Use it to insert, update, delete, or export Salesforce records.
  • Dataloader.io (varies by purchase plan): it has a clean and simple interface that makes it easy to import, export, and delete data in Salesforce no matter what edition you use. This third party tool allows you to schedule tasks and opportunity imports on a daily, weekly, or monthly basis.
  • Jitterbit: This free tool runs on both Mac and PC, and allows Salesforce administrators to manage the import and export of data. It is compatible with all Salesforce editions and supports multiple logins.

Best Practices

Be aware of the Governor Limits. Follow best practices for deployments with large data volumes to reduce the risk of hitting limits when executing jobs and SOQL queries. Here are some examples:

  • Large Data Volume testing should only be performed against a sandbox or test org. Keep in mind that you are not working in production, and you will need to migrate this data and configuration to another org to go live.
  • Use the Salesforce Bulk API when you have more than a few hundred thousand records.
  • Disable Apex triggers, workflow rules, and validations during loads; investigate the use of batch Apex to process records after the load is complete.
  • When updating, send only fields that have changed (delta-only loads); see the sketch after this list.
  • When using a query that can return more than one million results, consider using the query capability of the Bulk API, which might be more suitable.
  • Use External Objects or Big Objects: with the first, there’s no need to bring data into Salesforce at all, and with the second you can get consistent performance for a billion records or more and access a standard set of APIs from your org or an external system. In general, you avoid both storing large amounts of data in your org and the performance issues associated with LDV.
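
For the delta-only loads mentioned above, here is a small illustrative Python sketch that compares incoming rows against what is already in Salesforce and builds update payloads containing only the changed fields; the record shapes and field names are hypothetical, and the resulting payload would then be sent through the Bulk API or Data Loader.

# Illustrative delta-only update: send only the fields whose values have changed.
# Record shapes and field names are hypothetical.
def build_delta(existing, incoming, fields):
    changed = {f: incoming[f] for f in fields
               if f in incoming and incoming[f] != existing.get(f)}
    return {"Id": existing["Id"], **changed} if changed else None

existing_rows = {"001xx0000001": {"Id": "001xx0000001", "Phone": "555-0100", "Industry": "Retail"}}
incoming_rows = {"001xx0000001": {"Phone": "555-0200", "Industry": "Retail"}}

updates = []
for key, row in incoming_rows.items():
    delta = build_delta(existing_rows[key], row, ["Phone", "Industry"])
    if delta:
        updates.append(delta)

print(updates)  # [{'Id': '001xx0000001', 'Phone': '555-0200'}]; only the changed field is sent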

Monitoring

Finally, it’s important to monitor the situation: in Setup > Data Usage you can easily find all the information you need, for example how much space is used or which object takes up the most storage.
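
If you prefer to watch this programmatically rather than through Setup, the REST limits resource exposes the same storage figures; the instance URL, token, and API version below are placeholders.

# Read data and file storage usage from the REST "limits" resource.
# Instance URL, token, and API version are placeholders.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"   # placeholder
TOKEN = "00D...access_token"                          # placeholder

limits = requests.get(
    f"{INSTANCE}/services/data/v58.0/limits",
    headers={"Authorization": f"Bearer {TOKEN}"},
).json()

for key in ("DataStorageMB", "FileStorageMB"):
    info = limits[key]
    used = info["Max"] - info["Remaining"]
    print(f"{key}: {used} MB used of {info['Max']} MB")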

Conclusion

With the exponential growth of data in the age of IT, it’s becoming more and more important for customers to have integrated, end-to-end solutions in place for storing, archiving, and analyzing their data. The Salesforce platform offers several features that make it easy to develop a common-sense approach to data management, one that can deliver happier constituents, a more effective user experience, improved organizational agility, and reduced maintenance and cost; you only have to use it!
