If you have read the previous articles in this series, you may notice that one of the main themes presented for customer experience (CX) professionals or market research managers is “know before you go.” Program strategy informs project details, which define a sampling plan and allow you to design a relevant survey. There is one more important step to take before fielding: Put together your analysis plan.
How to Put Your Analysis Plan Together
When aggregating response data, you're summarizing customer opinion or producing statistics that support a decision to act. You are looking for a mean, percentage, or frequency distribution that leads to an insight. Before you start, carefully plan the wording of your survey questions, and know why you have chosen to collect the data in a particular format and how you plan to use it.
Internal stakeholders often seek immutable facts or established relationships that support a request for internal resources. Decision makers in many organizations will wait to make a change until an analyst says definitively: "This action will drive $X per customer." That level of certainty is not only rare but difficult to produce quickly, and the people pushing for an answer aren't always patient. Setting the right expectations for success and delivery is crucial.
Create organizational buy-in on your plan for analysis before the survey project is launched. Define the type of data you need, and what result would lead you to take action. To design metrics that drive decision-making, think about the following elements as early as possible when dealing with an internal problem or request for information.
Related Article: Getting to the Heart of Data-Driven Experience Optimization
Understand the problem you are trying to solve. This sounds basic, but I rarely see requests from an organization that seek a testable answer to a particular question. Create a "null hypothesis," phrased in a way that your data analyst can disprove with an "alternate hypothesis." For instance, "Customers are satisfied with our product at least 80% of the time" can be disproved if the proportion of satisfied customers in your sample falls significantly below 80%.
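As a sketch of what "disproving" that null hypothesis looks like in practice, here is a one-sided proportion test using the normal approximation. The sample numbers (150 satisfied out of 200) are hypothetical, chosen only to illustrate the mechanics:

```python
import math

def one_sided_proportion_test(successes, n, p0):
    """Test H0: true proportion >= p0 against Ha: true proportion < p0,
    using the normal approximation to the binomial."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under H0
    z = (p_hat - p0) / se
    # One-sided p-value: probability of a z-score this low under H0
    p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical sample: 150 of 200 respondents report being satisfied (75%)
z, p = one_sided_proportion_test(150, 200, 0.80)
if p < 0.05:
    print(f"Reject H0 (z={z:.2f}, p={p:.3f}): satisfaction is below 80%")
else:
    print(f"Cannot reject H0 (z={z:.2f}, p={p:.3f})")
```

The point is not the arithmetic but the framing: because the stakeholder's claim was phrased as a testable proportion, the analyst can give a yes/no answer with a stated confidence level instead of an open-ended opinion.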
Far too often, organizations collect data without a research question in the hope that one day it will be useful. Encourage your stakeholders to develop questions that engender specific actions, such as whether to change a policy, update marketing materials or refine workflows.
Focus on the End Result
What kind of data do you need to answer the question? Are you asking a quantitative question (how many?) or a qualitative one (why?)? What kind of data (nominal, ordinal or interval) best answers the question? Is the scale forced-choice or centrally weighted? Above all, know why you are collecting this data, and to what benefit.
Drive Insight With Study Design
Include as many yes/no questions as possible. They let you branch respondents based on actual opinion or behavior, and each one produces a binary variable that makes it easy for any data analyst to find relationships between variables by generating naturally occurring segments. Allow the respondent to chart their own path through your queries while simultaneously reducing the cognitive effort required of them. Hypotheses are easily tested when the variable is yes/no.
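To make the segmentation idea concrete, here is a minimal sketch of how a single yes/no question splits responses into comparable groups. The field names and data are hypothetical:

```python
from collections import Counter

# Hypothetical responses: each record has a yes/no branching question
# ("contacted_support") and a downstream satisfaction answer
responses = [
    {"contacted_support": "yes", "satisfied": "no"},
    {"contacted_support": "yes", "satisfied": "yes"},
    {"contacted_support": "no",  "satisfied": "yes"},
    {"contacted_support": "no",  "satisfied": "yes"},
    {"contacted_support": "yes", "satisfied": "no"},
]

# Cross-tabulate: each yes/no answer defines a segment to compare
crosstab = Counter(
    (r["contacted_support"], r["satisfied"]) for r in responses
)

for segment in ("yes", "no"):
    n = sum(count for (seg, _), count in crosstab.items() if seg == segment)
    sat = crosstab.get((segment, "yes"), 0)
    print(f"contacted_support={segment}: {sat}/{n} satisfied")
```

Because the segments fall out of the question design itself, the analyst can compare satisfaction between them immediately, with no post-hoc recoding.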
Related Article: How Thoughtful Surveys Generate Valuable Customer Feedback
Choose the Right Survey Response Scale
Questions abound about survey response scales in CX. There are many ways to display a scale and different ranges to choose from, and practitioners often ask which is the "right" one.
The most important feature in determining which scale to select is identifying how well it reduces the cognitive load for the respondent, but a secondary consideration should be the needs of your data analyst. Your scale should inspire analysis and inform the stakeholders to whom you are presenting quantitative results.
Your survey data should be a record of a quality conversation. If you are relieved not to "rock the boat" each month because your metric has been unchanged for months on end, it is time to reevaluate the effort you are spending on collecting the data. Dump the data that has no variability, especially data that has no seasonal variability. The quality of your data and the value of your analysis will suffer if customers answer a question again and again without hearing a word back or seeing a change in their experience.
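One way to flag a stagnant metric is to check its spread over recent waves before deciding whether to keep asking the question. A minimal sketch (the threshold and monthly scores are hypothetical):

```python
import statistics

def is_stagnant(monthly_scores, min_stdev=0.05):
    """Flag a metric whose month-to-month variation is negligible."""
    return statistics.pstdev(monthly_scores) < min_stdev

# Hypothetical 12 months of a 5-point CSAT average
flat_metric = [4.2] * 12
moving_metric = [3.8, 4.0, 4.3, 3.9, 4.5, 4.1, 3.7, 4.4, 4.0, 4.2, 3.9, 4.6]

print(is_stagnant(flat_metric))    # no variability: candidate to drop
print(is_stagnant(moving_metric))  # varies: worth keeping and analyzing
```

The threshold should be set per metric and scale; the point is to make "this question never moves" an explicit, periodic check rather than an accident you notice years later.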
By integrating these elements, you can start the process of data analysis before you even field the study. At the end of the day, you want to be able to justify the statistical assumptions that you have made. The best time to do that is before the first response comes in.
Eddie Accomando, XM Scientist at Qualtrics, is an applied anthropologist with 25 years of experience in the design, deployment, and maintenance of enterprise-wide CX programs. He brings a strong methodological focus to real-world programs, applying qualitative and quantitative research techniques to reveal insights that drive action within organizations.