In an era where human resource surveys are plentiful, Mercer remains fully committed to providing the highest possible quality across our entire suite of surveys. That’s why we regularly collect feedback from survey participants to help us zero in on what matters most to them, which ultimately allows us to provide the most relevant insights and findings possible when it comes time to release our reports.
As it turns out, a recent Mercer survey found that 44% of our users consider the quality and reliability of our data to be the main reasons why they choose us over our competitors. Naturally, this feedback is incredibly important to us, as it justifies the resources we allocate to prioritizing quality assurance activities with each survey, every year.
In order to deliver a survey that both meets the needs of users and reflects the latest changes in the labor market, our quality-assurance process starts long before a survey opens for participation. If you’ve ever been curious how we get from the initial planning phase to the final published results, this step-by-step overview will tell you everything you need to know. Let’s dive in!
Starting From Scratch, Each and Every Time
We consider each survey, every year, to be a new project – even if the survey is in its 25th year! To make sure each survey is the best it can be – from pre-participation, through data collection, to analysis and publication – we start by establishing a project plan that takes into account lessons learned from past seasons, incorporates advances in technology and operations, and adjusts the survey for changes we foresee in the relevant industry or labor market.
Key considerations in this part of the planning phase include:
- Participation Optimization: We carefully review the types and sizes of organizations that participate in our surveys. We also analyze the number of participants in our various surveys, as well as the general date ranges in which participants typically submit their completed data.
- Survey Question Refinement: We examine any questions that produced incomparable or insufficient answers, then consider how the wording of those questions might have affected the quality of the responses we received. While doing this, we also keep in mind the need to preserve year-over-year comparability in the resulting data.
- Emerging Topics: We often introduce additional survey questions to help us explore newer or trending topics within the survey, which could involve anything from emerging positions to the “hottest” industry policies and practices. These new questions are directly tied to what we hear from participants about the information they need to make more strategic HR decisions in their industries and sectors.
During the planning phase, we also set specific quality standards and expectations for each survey, and there are many ways in which the precise level of quality is defined. Most notably, we look at potential data deviations for each survey, defining what is and isn’t acceptable in the process.
Defining What’s Acceptable...and What’s Not!
For each survey, we engage in deep internal discussions about what sorts of deviations will and won’t be acceptable. With any survey, Mercer anticipates that a certain percentage of participants will provide data that deviates from the norm, and the acceptable range varies for each survey based on factors such as industry type and participation size.
Larger surveys with many participants, for example, are more likely to garner more than enough responses to achieve statistical significance. Therefore, survey owners won’t be as stringent about obtaining every single data point from every single participant; instead, they will plan to intervene only when data falls outside the established deviation threshold.
Conversely, for a smaller survey that collects data from a much smaller pool of participants, Mercer’s quality assurance teams will plan to conduct more thorough data-validation efforts because of that smaller sample size. So, in essence, this means that the smaller the sample size, the more the survey team will go the extra mile to compensate for any missing data points from their participants.
The Next Step: Ensuring Quality During the Submission Phase
Across the board, Mercer applies consistent standards to every published data report or survey result. So when a particular survey is open for participation, quality efforts continue alongside the data collection. These practices are put in place to ensure a consistent level of integrity.
Making Participation as Painless as Possible
When it comes to choosing a survey data provider to work with, a recent Mercer survey showed that 89% of participants said “ease of participation” plays a role (and 36% said it plays a major role). As such, we put a lot of effort into developing communications, webinars, and refinements to our surveys for the upcoming year. Through Mercer’s award-winning innovation, Data Connector, clients receive a platform to upload data securely, match their positions to the Mercer Job Library catalog, and address any audit queries. The machine learning and proprietary algorithms within Data Connector have elevated the participation process well beyond the old approach of exchanging Excel files between Mercer and clients. With this innovation as the new standard for participation, the self-service dashboard gives participants the ease, guidance, and insights they need to participate successfully.
After decades of first-hand experience collecting and compiling data, we know that the process can be complex and even confusing for participants. We try to ease this burden by clearly detailing what is expected of them as early as we possibly can, and Data Connector is unparalleled in the market in accomplishing this.
Streamlining the Job-Matching Process (and ensuring accuracy while we’re at it!)
After data ingestion, the participant has the ability to ensure appropriate position matching, which is based on Mercer’s newly improved Job Library. The Job Library itself is constantly growing, becoming more expansive as new jobs are added and more surveys leverage the data. One of the biggest improvements to the Library data source is that there are now 500% more positions available for matching. Yet even with this much larger library, the system is smarter and more intuitive than ever, which decreases the amount of time spent reviewing matches while improving the efficiency of job matching as a whole.
Even for surveys with jobs that aren’t matched using the Job Library, position lists will vary from year to year. Our team still proactively informs, and in some cases works directly with, participants to explain any new or altered job positions, as well as what those new positions encompass. This helps organizations more easily match their existing positions to any new or changed positions in our database, which is especially helpful in instances where certain job data wasn’t being collected previously but is now required.
Frequent communication is also helpful when it comes to dealing with hybrid positions, where employees play multiple roles that could potentially fall under multiple job descriptions (for example, an accountant who also does a lot of HR work). These hybrid positions can make it hard to match a person’s job accurately to one particular category. This is why our staff work closely with the participant’s organization to determine whether the job should be included as a hybrid role or whether one specific component of the job should take precedence.
Verifying Validity for Repeat Participants
As data submissions begin to roll in, our team embarks on the next critical step towards ensuring top-level survey quality. With the help of automation (and some keenly observant survey specialists), we analyze submissions in a variety of ways, keeping an eye out for any abnormalities or discrepancies that might compromise the accuracy of the survey.
For instance, if we notice a large change in a data point from one year to the next, we go to great lengths to validate that data’s accuracy. If an organization reports revenue that is markedly higher or lower than in previous years, we’ll want to confirm whether or not those changes are legitimate. Anything from human error to recent mergers or acquisitions could be the culprit behind these changes, but it’s always worth double-checking when data fluctuations seem too good (or too bad) to be true.
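To make the idea concrete, here is a minimal sketch of the kind of year-over-year check described above. It is illustrative only: the dictionary-based inputs, the function name, and the ±50% threshold are assumptions made for the example, not Mercer’s actual rules or tooling.

```python
def flag_year_over_year_changes(current, previous, max_pct_change=0.5):
    """Flag organizations whose reported value swings sharply versus last year.

    `current` and `previous` map an organization ID to a reported figure
    (e.g., annual revenue). `max_pct_change` is an illustrative threshold:
    anything beyond a +/-50% swing is queued for manual validation.
    """
    flagged = []
    for org_id, value in current.items():
        prior = previous.get(org_id)
        if prior is None or prior == 0:
            continue  # new participant, or no usable baseline to compare against
        pct_change = (value - prior) / abs(prior)
        if abs(pct_change) > max_pct_change:
            flagged.append((org_id, prior, value, pct_change))
    return flagged


# Example: one organization's reported revenue triples year over year and is flagged.
previous = {"org_a": 120.0, "org_b": 80.0}
current = {"org_a": 360.0, "org_b": 84.0}
print(flag_year_over_year_changes(current, previous))
```

A flagged entry would not be dropped automatically; as described above, it simply prompts a follow-up conversation to confirm whether the change is legitimate.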
We also double-check data discrepancies or complications that potentially stem from dealing with parent/subsidiary data reporting. If one of our participants happens to be a parent company, we like to work alongside their representatives to make sure all the info they code and submit is correct at both the parent and the subsidiary levels. We also take special care to analyze and ensure that the parent company has selected the appropriate industries for itself and its subsidiaries.
The Aggregate-Cleaning Phase: Outliers, Deadlines, and More!
Once all participants have submitted their data, our teams begin the process of aggregate cleaning, which is almost always the most time-intensive quality-assurance component; in fact, in 2018, Mercer spent 1,305 hours on aggregate cleaning of the US Mercer Benchmark Database – our most popular survey. But it’s always for good reason – after all, a large portion of this cleaning phase involves reviewing all the submitted data at the job level, identifying any outliers, and determining how those outliers should be handled.
Time to Get Technical – But Just for a Moment!
Before moving on, let’s quickly define what a “z-score” is: it’s the number of standard deviations a data point lies away from the aggregate mean. In other words, for a raw score x, a mean μ, and a standard deviation σ, the z-score is z = (x − μ) / σ – a measure of how many standard deviations above or below the population mean that raw score sits. A z-score is also often referred to as a “standard score,” and it can be placed on a normal distribution curve to help with important calculations.
When we’re cleaning the submitted data, we use z-score-focused AI automations to identify any critical data points that fall above or below designated thresholds. If a data point falls outside of its designated threshold, it’s most likely an issue that needs to be addressed.
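As an illustration only, here is a minimal sketch of this kind of z-score check, assuming a simple list of reported values for a single benchmark job. The function name and the thresholds are assumptions for the example; in practice, the acceptable deviation is set per survey during the planning phase.

```python
from statistics import mean, stdev


def z_score_outliers(values, threshold=3.0):
    """Return (index, value, z-score) tuples for points whose |z| exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # every value is identical; nothing stands out
    return [
        (i, x, (x - mu) / sigma)
        for i, x in enumerate(values)
        if abs((x - mu) / sigma) > threshold
    ]


# Example: base salaries reported for one benchmark job, with one extreme value.
# A threshold of 2 is used here because such a small sample cannot produce a
# z-score much above 2; real thresholds depend on the survey's planning decisions.
salaries = [52_000, 54_500, 51_000, 53_200, 55_000, 50_800, 120_000]
print(z_score_outliers(salaries, threshold=2.0))
```

Here, the 120,000 entry is flagged for review while the tightly clustered values pass; as the next section describes, a flag is a prompt for investigation, not an automatic exclusion.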
Addressing Outliers
When it comes to accounting for these outliers (whether identified by a z-score, manual audit, or some other way), there are many steps that go into verifying whether the outlying data is relevant, or if it should be omitted from the final results.
For example, just how far out of range from the norm is this position? Or, perhaps even more importantly, just how many outliers are there? Sometimes, if it’s just one or two small outliers, they will simply be excluded from the data compilation. However, if an anomaly is of particular significance to the overall data set (e.g., a high number of employees is tied to that data point, which would affect the overall statistics), then Mercer’s survey owners will follow up with the submitting organization and work with them to learn more about the reported data and validate or correct it. An increase in outliers is sometimes indicative of changes occurring in the market for a specific position, skill set, or industry, rather than an anomaly or error, and the information gained during this validation process can be helpful in formulating insights.
Another determining factor in whether or not Mercer’s survey owners will track down these outliers is the survey sample size, which ties back to the matter of predetermining acceptable levels of deviation. Generally, the larger the survey, the more acceptable one or two outliers will be, while the inverse is true for smaller surveys with smaller sample sizes.
Dealing with Deadlines
Validating outlying data submissions also comes down to the matter of time, perhaps more than anything else. If these outliers are identified early on, then there’s ample time for us to confirm their validity. As a general rule, survey owners are mindful not to delay the release of survey results while attempting to address outliers; in the rare instances where we do delay a release to seek out additional outlier-related information, it’s always because that missing data will significantly improve the quality of the overall survey.
After all, our primary goal – above all else – is to always provide you with the highest-quality and most statistically sound human resource surveys possible, no matter who the participants are. Therefore, if we do ever choose to push back a survey’s publication, it’s never a decision that is made lightly; instead, it’s one that’s made with careful thought and consideration beforehand regarding just how much those potential corrections could improve the end product.
Survey Managers and Data Analysts: Building Relationships and Helping Participants
Our dedicated team of survey managers keeps the architecture of each survey up to date. They connect the needs of clients to the proper surveys and provide participants with additional insights that complement the survey. During the data-collection period, data analysts conduct regular data audits, then provide those audit results to you to help confirm data-point accuracy. Once the audit findings have been returned with validations or corrections, the updated data comes back to our data analysts, who thoroughly re-analyze the updated responses. If the numbers still seem off, one of our data analysts contacts participants directly to help them fix any data errors before proceeding further.
Our survey managers and data analysts also focus heavily on healthy relationship development. Much like an account manager at an advertising agency would be involved with frequent back-and-forth with their clients, our survey managers are tasked with carrying on dialogues and building rapport with our survey participants. In some cases, our team will even call or email you to discuss any data discrepancies one-on-one, giving you the opportunity to provide direct input that eliminates any remaining confusion. This unique relationship-building process allows Mercer to stay tapped into what’s top-of-mind for HR professionals across a range of industries and organization types.
Optimizing the Survey Process for the Future
We’re always looking for ways to further improve the quality of our data. As a result, we very much welcome any and all feedback regarding the following:
- How can we streamline the entire data collection and compilation processes from beginning to end to make it even easier for you?
- Are there certain aspects of the participation process that prove more difficult than others when trying to collect and submit data?
- Is it difficult for you to meet the established submission deadlines each year?
As you can see, operational excellence is always top of mind for us at Mercer, and we know it’s what our clients value when they come to us for the latest HR insights and findings. Therefore, it only makes sense for us to continually seek out fresh input on how to improve our processes and create better, more relevant surveys, regardless of industry or location. At the end of the day, our goal is to ensure that you always have access to the most accurate Mercer data possible and the best customer service, regardless of which human resource survey you’re participating in or purchasing.
View a complete list of Mercer surveys available for the United States or Canada.