MaxDiff is a methodology we love for message testing in advocacy campaigns. Increasingly popular in marketing and product research, MaxDiff is a method Pluriel applies to the public affairs and strategic communications space to deliver relevant, accurate, and actionable insights.

A central component of any strategic communications or advocacy campaign is your message. One way to quickly and cost-effectively conduct message testing is with the “MaxDiff” approach we use at Pluriel.

For example, if your organization wants to persuade Canadians that we need to build more housing, should you lead with a message about cost, supply shortages, or commute times?

At Pluriel, we’ve implemented a variety of approaches to message testing. We’ve come to love MaxDiff because it identifies the most persuasive messages, ranks every message from most to least persuasive, and lets us analyze what works best for which subgroups of the public.

Second-best approaches

Before we get into what sets MaxDiff apart, it’s important to look at the other approaches commonly used. 

One methodology is to ask respondents whether they find a given statement to be a strong (persuasive) or weak (unpersuasive) argument. While this can tell us how your audience might evaluate a message, it treats that message in isolation, hardly a realistic “simulation” of the communications environment you’re operating in.

Another methodology is to ask for a respondent’s opinion about, say, a policy. You then show them one version of your message, ask whether it makes them more or less likely to support the policy, and repeat the exercise with a different version of the message. While this is an improvement, it can be cognitively taxing for respondents, and when your survey respondents are fatigued, it’s hard to separate the signal from the noise.

A third methodology is to run an experiment (sometimes called A/B testing). You show each of two or more randomized groups a different message, then test whether there’s a difference between the groups in, say, support for a policy. If you’re randomizing correctly, you’re identifying the effect of each message (and that message alone) on some outcome, and you can compare those effects. This design is great, but unless the messages have large and very different effects, you’ll need a huge sample to detect them, and that can get very costly very quickly. And if you want to see whether the effects differ across different types of people, you’ll need an even bigger sample.
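To make the sample-size problem concrete, here’s a rough power-calculation sketch in Python, assuming the statsmodels library and a “small” standardized difference between two messages (Cohen’s d = 0.1). The effect size and thresholds here are illustrative assumptions, not figures from a real study:

```python
# Rough sample-size estimate for a two-arm A/B message test on a
# standardized outcome (e.g., policy support). d = 0.1 is a "small"
# difference; alpha = 0.05 and power = 0.8 are conventional thresholds.
from statsmodels.stats.power import tt_ind_solve_power

n_per_group = tt_ind_solve_power(effect_size=0.1, alpha=0.05, power=0.8)
print(f"{n_per_group:.0f} respondents per message arm")  # roughly 1,571
```

And that’s for a single two-arm comparison: testing five messages means five arms, and slicing by subgroup shrinks each cell further.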

Maybe you’ve used these methodologies before. We think there’s a better way.  

How MaxDiff solves message testing problems

MaxDiff is a measurement technique that we borrow from marketing and product research. In the traditional setup, respondents see a list of product features and say which is most important to them and which is least important. For example, do consumers care most that a cell phone plan (i) is low-priced, (ii) offers widespread coverage, or (iii) includes a free streaming subscription?

Recently, innovative public opinion researchers in the advocacy and strategic communications space, like Pluriel, have begun using MaxDiff for message testing. 

Using this methodology, we ask respondents what they think are the strongest (or most persuasive) arguments for a policy vs. the weakest. Instead of product features, we use advocacy messages. 

Let’s say you’re an organization trying to persuade the public to support the construction of new housing, and you have 5 messages you want to test:

  1. Unaffordable prices
  2. Shortage of homes for growing families
  3. Housing shortage leads to longer commute times
  4. Only the rich can afford to live in desirable areas
  5. Lack of access to shops within walking distance 

The MaxDiff methodology presents respondents with a set of these messages, and respondents identify the best and worst arguments; then they do the same for other sets of arguments. The survey might look something like this:
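  Which of these arguments for building more housing do you find MOST persuasive? And which do you find LEAST persuasive?

    - Unaffordable prices
    - Housing shortage leads to longer commute times
    - Lack of access to shops within walking distance

(An illustrative screen: each task shows a subset of the five messages, and respondents pick one “most persuasive” and one “least persuasive.”)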

After applying a statistical model to these choices, we can produce a rank-ordering of your messages as well as an effectiveness score for each individual message. We uncover not just which messages people like, but how much they like them compared to the alternatives.
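The intuition behind those scores fits in a few lines of code. Here’s a minimal, hypothetical sketch in Python (made-up responses, shorthand labels for the five housing messages above) that scores each message by how often it’s picked as best minus how often it’s picked as worst, normalized by how often it was shown. Real MaxDiff analyses typically use multinomial logit or hierarchical Bayesian models rather than raw counts, but the counting version conveys the idea:

```python
from collections import Counter

# Hypothetical MaxDiff data: each record is (messages shown, best pick,
# worst pick), using shorthand labels for the five housing messages.
responses = [
    (("prices", "commutes", "shops"), "prices", "shops"),
    (("prices", "families", "rich"), "prices", "rich"),
    (("families", "commutes", "shops"), "families", "commutes"),
    (("prices", "rich", "shops"), "prices", "shops"),
    # ...one record per respondent per task
]

shown, best, worst = Counter(), Counter(), Counter()
for messages, best_pick, worst_pick in responses:
    shown.update(messages)
    best[best_pick] += 1
    worst[worst_pick] += 1

# Best-minus-worst score, normalized by how often each message appeared.
scores = {m: (best[m] - worst[m]) / shown[m] for m in shown}
for message, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{message:10s} {score:+.2f}")
```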

Moreover, we can conduct fine-grained analysis on what works best with subgroups of the population. 
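As a hypothetical illustration, the same count-based scoring can be run within each subgroup once responses carry a respondent attribute. Here, a made-up split between renters and owners:

```python
from collections import Counter

# Hypothetical responses tagged with a subgroup label (renters vs. owners).
responses = [
    ("renter", ("prices", "commutes", "shops"), "prices", "shops"),
    ("renter", ("prices", "families", "rich"), "prices", "rich"),
    ("owner", ("families", "commutes", "shops"), "families", "shops"),
    ("owner", ("prices", "rich", "commutes"), "commutes", "rich"),
]

groups = {}
for group, messages, best_pick, worst_pick in responses:
    g = groups.setdefault(group, {"shown": Counter(), "best": Counter(), "worst": Counter()})
    g["shown"].update(messages)
    g["best"][best_pick] += 1
    g["worst"][worst_pick] += 1

# Rank messages separately for each subgroup.
for group, g in groups.items():
    scores = {m: (g["best"][m] - g["worst"][m]) / g["shown"][m] for m in g["shown"]}
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(group, "->", ranking)
```

In this toy data, renters rank the affordability message first while owners rank the growing-families message first, exactly the kind of subgroup contrast that shapes targeting decisions.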

We love MaxDiff because it doesn’t just give us insights into the public’s surface-level preferences. Other methodologies might tell us that these are all strong messages! But when respondents are forced to consider trade-offs, their responses tend to be more revealing, honest, and closer to how they’ll confront the issue outside of a survey environment. 

MaxDiff is a powerful method for message testing—it’s our gold standard—and Pluriel is the research company that can unlock its full potential for your organization or business.