How to Use Surveys to Get the Data You Need, When You Need It


Louis: Recording now, let's do this. Bonjour, bonjour, and welcome to another episode of Everyone Hates Marketers, the no-fluff, actionable marketing podcast for marketers sick of shady, aggressive marketing. I'm your host, Louis Grenier. In today's episode, you'll learn how to use market research, in particular surveys, to get the data you need when you need it.

My guest today is a director of product marketing at SurveyMonkey. You might have heard of SurveyMonkey; it's the biggest survey company in the world. They have a lot more products than that, but that's mainly what they're known for. She's been there for five years, and she started her career in marketing effectiveness analytics at Nielsen, the global measurement and data analytics company.

So, she lives and breathes data, and I'm super happy to have you, Morgan Molnar, on board.

Welcome.

Morgan: Thank you so much for having me, Louis.

Louis: So, market research, surveys, all of that. As I was telling you before hitting the record button, I think there's a lot of stigma around it. A lot of misconceptions, a lot of lies told about market research.

But before we talk about those a bit: if you had to define it in as few words as you can, what is market research to you?

Morgan: Oh, yeah. Well, so market research, you're right, there's a lot of stigma around it. But at its core, it's gathering information from the outside, from the market, to inform decision-making and the actions that you take in your business. Simple as that: gathering information. And it can be done in a lot of different ways: primary or secondary research, qualitative or quantitative research. But at the end of the day, it's just gathering information to help you make a better decision.

Louis: So let's briefly define, because you've done such a good job with the first definition, primary versus secondary, and then qual versus quant.

Morgan: Yeah. So when you're looking at primary versus secondary research, primary is gathering that information yourself. You are going out there, you are collecting the raw data. Secondary research means you're consuming data that someone else has already collected for you. You may be repackaging it or using it in a different way, but you're not the initial gatherer of that research.

Okay, so qualitative versus quantitative. The easiest way to remember quantitative: quantity, numbers. That's the way I think about it. So quantitative research is rooted in harder statistics and numbers. Qualitative can be a little more fluid, whether it's analyzing images or text or having an interview; it's a little more unstructured in that way.

And you know, there are a lot of pros and cons to each, and a lot of different things you can do with either qualitative or quantitative. But usually when you bring the two together, that's when you can tell a really rich story, where you've got the numbers, but you've also got the why, or the experience, or the emotion behind the numbers.

Louis: Thanks, that makes a lot of sense, and it's great to define those key terms, because I suspect we're going to use them throughout this conversation. So, I've been very lucky to have been able to speak to Seth Godin twice; yes, I'm showing off here, but he's one of my heroes. I think one of the first marketing books I ever read was his.

Anyway, in the second interview, I challenged him a bit when he was talking about the fact that he doesn't really use anything but intuition to test stuff and launch products. He doesn't seem to be a very big fan of interviewing customers, sending surveys, or anything like that.

He'd rather, you know, show something to people, see how they react, and then adjust. And it's difficult to disagree with someone like that, who's done an amazing job of making marketing more accessible to everyone. But it surprised me a bit, because to me, market research is a huge part of what I do as a marketer.

Almost every time I learn something new, I start with customer interviews, surveys, and whatnot. So the stigma he voices is that, you know, people don't know what they want, so they can't tell you, and basically market research is a bit skewed, and all of that. So what do you say to that kind of comment?

Morgan: That's a very Steve Jobs mentality as well. I've heard that take, and the way that I would push back or challenge it... there was a small piece of what you said in there: he likes to put something out there, see how people react to it, and then adjust. Market research is a lot of doing just that,

but before you've done your major launch. So you can, say, have early prototypes, early pieces of marketing copy or messaging or creative; get them in front of people early, get feedback, iterate. All of that can be done very quickly and rapidly in any sort of product or campaign development process, before you launch to the masses. In doing so, it's that same kind of mentality.

I want to make something, I want to build something, a prototype, and then get feedback. But if you're doing that before you launch, you're likely saving a ton of money and you're able to iterate faster. If you think about getting something out into the market, it has to be pretty baked.

You know, in tech we talk about MVPs a lot, minimum viable products, but an MVP has to work end to end. When you're doing market research, you can kind of chunk that up and do the same thing that Seth is talking about, but in pieces, and before you launch, before you spend a lot of money. In that way, it can help a business greatly,

whether you're talking about startups or more established companies that are maybe launching new features or new products, or expanding into different markets.

Louis: So that's nice for testing new stuff: creative, copy, and whatnot. And I understand that in this instance, you're basically still looking at people's behavior.

They might pick one option out of three, and they might give you some feedback, so that's close enough to a real-life scenario of how people will actually behave. But what about cases that are a bit less close to the prototype, closer to actual primary research: gathering data around, I don't know, people's pains, challenges, what they want to do, and so on? For this type of scenario, do you have any no-noes? Let's talk about surveys specifically: what do you think are the top three mistakes for people listening right now who want to design surveys?

Morgan: That's a good question. And you've kind of reined it in to survey research, which is obviously a place I know a lot about, having been at SurveyMonkey for five years. I'd say that when you're starting out writing a survey and you've never done it before, you look at that blank canvas and it can be really intimidating.

I write survey questions the way that I talk; the way I would ask you a question on this podcast is the way I like to write survey questions. I like to make them really colloquial and not use a lot of jargon. I think people get really formal when they write surveys, which can be good in some instances, but can also be a mistake, especially if you're using really in-depth or niche, jargony terminology that's maybe only relevant to you or to people who are intimately familiar with your industry. So that's one thing: you need to write questions in ways that people on the receiving end will be able to understand and respond to appropriately.

The other thing is that, man, there's so much bias you can introduce into a survey; you could probably do an entire podcast just on that. But the one that I always harp on as a fundamental best practice is avoiding bias in the question wording that might lead a respondent to, how do I want to phrase this, show up the way the researcher wants them to show up.

This happens a lot in political polling, for example. If you're asking, you know, a favorability question about a particular candidate or something of that nature, and you add in adjectives, or you add in framing that leads a respondent down a particular path,

that is a big no-no. You want questions that are balanced, that give equal opportunity for positive and negative responses. That's going to lead you to much more unbiased data and results. When I think about the answer options themselves, a lot of times people make mistakes with what I'd call the golden rules of survey question writing: making sure that your answer options are mutually exclusive and collectively exhaustive. What I mean by that, because it's kind of a mouthful, is: mutually exclusive means your answer options never overlap. Let's go with a very simple question: how many hours of TV do you watch in a given week?

You don't want answer options of one to three, three to five, and five to seven, because three and five each show up in multiple answer options. So that's one thing you want to avoid. The next is collectively exhaustive: you want to make sure that your answer options span the entire range of possible human behavior.

So in that same question, you want to make sure you have an "I don't watch TV" option, and you want to make sure that your highest hour count has an "or more" at the end, so that you're capturing all possible responses to that question. So those are probably the things I think about.

It's very granular, very tactical survey-writing advice. You know, I think we could probably step back a little and talk about some of the benefits of surveys, if that makes sense, Louis.
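To make the "mutually exclusive and collectively exhaustive" rule concrete, here's a minimal Python sketch; the bucket definitions are hypothetical illustrations, not anything from SurveyMonkey. It checks a set of numeric answer ranges for overlaps and gaps:

```python
# Hypothetical answer buckets for "How many hours of TV do you watch per week?"
buckets = [
    (0, 0),      # "I don't watch TV"
    (1, 3),      # "1 to 3 hours"
    (4, 6),      # "4 to 6 hours"
    (7, None),   # "7 hours or more" (open-ended top)
]

def check_buckets(buckets):
    """Raise if buckets overlap (not mutually exclusive) or leave gaps
    (not collectively exhaustive)."""
    for (lo1, hi1), (lo2, _hi2) in zip(buckets, buckets[1:]):
        if hi1 is None:
            raise ValueError("an open-ended bucket must come last")
        if lo2 <= hi1:
            raise ValueError(f"overlap: value {hi1} appears in two buckets")
        if lo2 != hi1 + 1:
            raise ValueError(f"gap: hours {hi1 + 1} to {lo2 - 1} have no option")
    if buckets[-1][1] is not None:
        raise ValueError("missing an open-ended 'or more' bucket at the top")

check_buckets(buckets)                      # passes
# check_buckets([(1, 3), (3, 5), (5, 7)])   # raises: 3 appears in two buckets
```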

Louis: Yes. Before we talk about benefits, another thing I want to touch on is the fact that humans have incredibly complex, you know, decision-making. We rationalize decisions, even though most of them are completely unconscious, based on our set of stories and what we want to achieve. It's very difficult for people to externalize, you know, why they bought something, for example.

And that's something I keep thinking about when I design surveys or ask a question. It's very difficult to take everything at face value and consider every single answer to be the truth. So I'm curious: how do you advise folks to ask the kind of questions that are there to understand why someone bought something or took a decision?

And on the other side, the no-go questions, such as "Would you buy something like this?" or "Are you interested in this?", stuff that tries to predict future behavior. How do you deal with this in general?

Morgan: Yeah, well, you make a good point: intention doesn't always equal behavior, and that's true with surveys, especially if you're asking future-forward questions: what are you going to do, would you, et cetera?

So for example, in most concept testing surveys, there is a golden question around purchase intent. You've just been exposed to a new idea, you've maybe answered some follow-up questions; the question that a lot of folks use as the one most closely tied to what will actually happen, from a behavior standpoint, is the purchase intent question.

Usually the way it's worded is: how likely would you be to purchase this product, download this app, et cetera. And you have a nice range in the scale, not from likely to unlikely, but more from absolutely to not at all. That helps frame it from positive to negative: it's extremely likely, very likely, somewhat likely, not so likely, and not at all likely.

So we've gotten away from the strongly-for versus strongly-against framing; we've seen more positivity bias skewing results there, in terms of how that maps to behavior. So that's the question we usually use, and there have been some academic studies that tie it to behavior.

It's not going to be perfect, but what you can think about as you're designing a survey like that is the use of benchmarks. You can ask this about your product, you can ask it about a competitive product, you can ask it about past products that you've launched where you know the in-market performance, and you can start to build a database of results and benchmarks,

so you can compare these things against each other. Then what you're doing is comparing survey data to survey data, rather than survey data to market data, which is not always going to be apples to apples. And it can...

Louis: That's interesting, because you would ask the same question, the exact same words, the exact same type of answers, for, let's say, your own [inaudible] product.

You pre-launch your first product and run the survey to get data. Then you know the actual purchases you got out of it, so you have a benchmark, and then you can do the second product: okay, people are a bit more likely to purchase, so if there is some sort of correlation, we can expect that much.

So then, as you said, you compare like for like, with the exact same question, instead of, you know, trying to reinvent the wheel or guess at stuff that might or might not happen. What about competitors, though? With competition data, what's the...

Morgan: So for something like that, you know, you can't get your competitors' sales data all the time.

I mean, there are some point-of-sale providers that do offer syndicated data, but that's really expensive. If you're a startup, that's not going to be something you have at your fingertips. So what you can do is set up essentially the same thing; it's kind of like an experiment, but using surveys. You set up the same survey, but instead of your own product concept, you use a competitor's product concept.

In that case, you set up the survey the same way, you target the same people, and then, again, you're comparing responses to responses. And you can set that up with competitors who are at the top of the pack, in the middle, or at the bottom of the pack.

You don't have the exact sales data to tie that back to, but you generally have a sense of market share, or of the most popular thing in the market. So then what you can do is make that comparison and just see: okay, how does my concept perform against things that are already out there in the market?

Louis: So let's dive into that a bit, because that's quite interesting; I've never done it before, so I'm genuinely curious. Let's go through a hypothetical here. Let's say we're selling a new toothbrush that's fucking phenomenal because it's actually fresh.

Morgan: Yeah. The best toothbrush ever!

Louis: It actually removes the yellow on everyone's teeth with just one use. And we compare that to the likes of, off the top of my head, Colgate and all of those big brands. We want to know whether this concept is going to work against toothbrushes that already exist. Obviously, we're not going to design the entire survey together right now, but if we had to pick maybe the top two or three questions you'd ask to get an understanding of whether we can expect some sales, how would you design this?

Morgan: Yeah. So the main structure of a market research survey is going to have some initial screening questions, perhaps, to make sure you're targeting the people you need. Then you're going to have the meat of your survey, and then potentially some demographics at the end,

so you can filter, slice, and dice your results. For a study like this, essentially what we're doing is a concept test: we're trying to test whether our product idea is viable and ready to launch, and whether there'll be a good reception in the market. In this case, you've got, you know, your Colgate brush, your Oral-B brush. When you're picking the stimuli for a concept test survey, what that means is essentially: what is the idea that we're presenting to respondents to react to? It could be a text description, an image, a video, a GIF, et cetera, embedded in the survey, which respondents then react to through follow-up questions. The stimuli in this case: you've got your idea; it's unclear how well baked that is, but let's assume you've developed it pretty far and you've got product images, or maybe even what it might look like in the packaging, or maybe you're even testing the price point.

You want to make sure that the stimuli are similar enough across everything you're trying to get feedback on.

So in this case, if I've got a product image in the packaging with a price point, I want to create that same look and feel with my competitors' products. Maybe you take your competitors' toothbrushes and photograph them the same way you've photographed your own product's packaging, you put the actual price point you'd see in the store, and you create those stimuli to be pretty equal.

The only thing we're testing here is the difference in what the consumer is reading, as far as the pricing and the packaging, and, you know, the visual of the color, shape, and style of the toothbrush. So what you're doing in the survey is first embedding that stimulus, and then asking follow-up questions.

As for the questions, for a product like a toothbrush, I mentioned purchase intent; that's going to be one of the main ones. You might also ask something around overall appeal: how much do you like this product? How likely would you be to buy this product?

Is the price point, you know, too expensive, or so cheap you think it might be poor quality? Things like that. There's a variety of different metrics or attributes you could ask about for these different products. What I always like to do is keep the survey to a five-point scale in terms of the answer options.

I kind of mentioned this before: extremely, very, somewhat, not so, not at all. Whatever your metric is, if you keep that five-point scale consistent across all of the follow-up questions, it makes it really easy to analyze your results, because you've got everything on the same scale. You can compare, you know, the top score, or the top two scores combined, which is called a top-two box. What you're then able to do, once you have your results, is compare those scores across all the different attributes or metrics for the different products.

So, how does your product score against your top two competitors? By the way, if they're very established, like a Colgate or an Oral-B, you can bet that they've done some research on their products and that they've got some of their best stuff out on the shelves. If your product is doing well against those, then you've got a pretty solid amount of evidence that what you launch is going to work, or going to, you know, sell, which at the end of the day is really what you want.
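As an illustration of the top-two box idea Morgan describes, here's a minimal Python sketch; the concepts and response counts are invented for the toothbrush example:

```python
# Invented counts per five-point scale answer, per concept:
# [extremely likely, very likely, somewhat likely, not so likely, not at all likely]
responses = {
    "our toothbrush": [38, 52, 61, 29, 20],
    "competitor A":   [45, 60, 55, 25, 15],
    "competitor B":   [22, 40, 70, 38, 30],
}

def top_two_box(counts):
    """Share of respondents choosing one of the top two scale points."""
    return sum(counts[:2]) / sum(counts)

# Keeping every question on the same five-point scale means one metric
# works for all concepts and attributes.
for concept, counts in responses.items():
    print(f"{concept}: {top_two_box(counts):.0%} top-two box")
```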

Louis: Yeah. Because if you compare yourself with products that are very well known, you will also see that the more people are aware of something, the more they tend to like it, and so the more favorably the leading brand will score. It's a proven fact as well. And what's interesting in what you bring up is that if your product scores well against something that's established in the market, it's overcoming some of that brand awareness, brand affinity, brand trust, et cetera, that's been built up over decades for some of these brands. So that's a really good indicator. Let's backtrack a bit, because I'm pretty sure people are asking themselves: okay, that's all well and good, but where do you find the people?

Actually, before you answer that, the first thing I'm wondering when you design such a survey is: do you ask the same person the same questions on all three products, or do you basically do a three-way test, where you have an audience of, let's say, 6,000 people, and 2,000 will get your product, 2,000 will get competitor one, and 2,000 will get competitor two?

Morgan: So you're really hitting at the fundamentals of concept testing here. What you're describing is the difference between a monadic design and a sequential monadic design. A monadic design is where a single respondent sees a single product or stimulus and is asked a single set of follow-up questions. A sequential monadic design means they may see a series of products or stimuli, maybe in a random order, and are then asked the follow-up questions about all of those. There are pros and cons to each. Monadic is usually touted as the more methodologically or theoretically pure design, because you're getting a fresh set of eyes on each of the concepts.

Respondents aren't biased by what they saw previously, and they won't adjust their responses; they're just reacting to the first concept they see. If you are doing that, and you've got subsets of respondent groups answering questions about each of the different products you're testing, you want to make sure the balancing and targeting of those respondents are as comparable as possible. If one subset skews much more female and another more male, then, you know, it's really hard to compare the results across them.

For sequential monadic, the pro is really cost, in a lot of ways. If you're purchasing responses from, say, a panel, as we have at SurveyMonkey, then for a monadic design you're basically having to pay for three times the number of responses if you're testing three products. With the sequential design, you can randomize those concepts and pay for fewer responses.

Now, I'd say I recommend a monadic design when you've only got a few stimuli to test. When you're trying to test maybe, I don't know, 10 to 15 different concepts, which I have seen customers doing all the time, I recommend some sort of sequential design where respondents see a subset of, say, five random concepts. It's a little more bang for your buck that way.
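To make the monadic versus sequential monadic distinction concrete, here's a minimal Python sketch of how each design might assign concepts to respondents; the concept names, respondent IDs, and subset size are hypothetical:

```python
import random

concepts = [f"concept_{i}" for i in range(1, 16)]    # 15 concepts to test
respondents = [f"resp_{i}" for i in range(1, 31)]    # 30 panel respondents

# Monadic: each respondent sees exactly one concept (fresh eyes per concept,
# but you pay for ~15x the responses to reach the same base size per concept).
monadic = {r: [concepts[i % len(concepts)]]
           for i, r in enumerate(respondents)}

# Sequential monadic with a subset, as suggested for 10-15 concepts:
# each respondent rates 5 randomly chosen concepts, in random order.
rng = random.Random(42)                              # seeded for reproducibility
sequential = {r: rng.sample(concepts, k=5) for r in respondents}

print(monadic["resp_1"])      # ['concept_1']
print(sequential["resp_1"])   # five concepts in random order
```

With the sequential assignment, it's worth checking afterwards that each concept was seen by roughly the same number of respondents, echoing Morgan's point about keeping subgroups balanced and comparable.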

Louis: Okay, that makes sense. And you touched on the other question I wanted to ask, which is the panel side of things. I've done it multiple times in my career: you want to get data from people who might not be customers of your brand, and you just have a customer profile in mind, a real customer persona, or at least, you know, category entry points; people buying a specific category, like toothbrushes. And you want to reach out to those people, outside of your brand's customers, to ask them some questions.

So SurveyMonkey does that, right? I don't want to go too much into the tools right now, as of 2020, because if people listen to this episode in five years, who knows what it will be, but we can touch on it a bit. So a panel is basically a way to buy responses from folks who answer surveys in their free time because they're paid to do it; is that correct? I mean, on the SurveyMonkey side...

Morgan: Yeah, it depends on the panel, and we can take the toothbrush example and continue with it. Let's say you're at a toothbrush startup and you don't have a lot of customers yet. That limits who you can reach out to. You've probably got a lot of friends, family, and social media followers, but you might not have a broad, nationally representative group of people you can get feedback from.

So panels are a great way to do that: to source and purchase the responses for your survey in a targeted way, reaching exactly who you need to reach. There are a couple of different iterations of this, but most panels are, at their very root, a collection of people who have signed up to complete tasks in order to receive an incentive.

And that incentive varies; the tasks even vary. Some panels are not just survey panels; they could involve other types of tasks. And the incentives vary too: you could have pure cash payments, or points towards redeeming gifts or gift cards, or, in SurveyMonkey's case, our Contribute panel actually donates to a charity of the respondent's choice for taking the survey.

So there's a variety of different incentive models, and even that varies from country to country.

Louis: My biggest question regarding panels is something that baffles me, because obviously I've never replied to a survey in exchange for incentives or whatnot. But when I've done this before, when I've tried to reach out to specific folks, in the B2B sector, for example, quite targeted, I'm always baffled at the number of answers you get for very targeted demographic and firmographic criteria. Such as: I only want people in marketing who graduated from college at a minimum, who earn whatever; I'm just making that up. And I'm surprised that there are actually people fitting those criteria available to answer panels.

Every time, I question the validity of the panel, thinking: that can't be right, who has the time to answer those surveys? If they're fully employed, if they have 30 years of experience, who would do that? So is that an unfounded misconception I have, or what?

Morgan: It's just the beauty of scale.

Most panels have millions and millions of people in them. To get at the targeting you just mentioned, what panels do when you sign up is profile people by asking a series of questions: what's your age, your gender, where are you from, your zip code, your education, do you have children, et cetera.

So that's how they get that information, and they're usually keeping it pretty up to date, continuing to re-profile folks. But it's really the beauty of scale. Think about any funnel in marketing, even an email funnel, where you've got a pool of people to send to: only so many people will open, only so many will click, et cetera, and then convert.

The same thing is true with surveys. We've got this big panel of people that we've recruited over years, through a variety of recruitment methods. I mean, you mentioned B2B; it's interesting, some of the panels we work with at SurveyMonkey source people through rental car loyalty programs, or things that tend to have more business folks in those kinds of groups.

What you're doing to make sure you've got that quality is continuing to profile people, continuing to use screening questions in the survey to make sure you're getting the right people. But you'd be surprised how much turnover, churn, and re-recruitment goes on in a panel.

I mean, gosh, it would be hard for me to throw out a statistic right now, but there is a lot of turnover in panels, so you're getting a pretty fresh group of people. In some of the panels we work with, panelists might take five or six surveys before they drop out of the panel,

And so you're having to re-recruit.

Louis: It's like buyers in any category. When you look at the real distribution of light buyers versus heavy buyers in any purchase category, it turns out that, at a minimum, 40% of buyers are light buyers, meaning they buy once a year or less, and that's the minimum.

What your answer made me think about is that a lot of folks might answer, as you said, one, two, three surveys, and then churn. And then you have a very few who answer a ton of surveys, who do that almost professionally. Yeah.

Morgan: It's kind of like that hockey stick chart.

Louis: OK, cool, thanks for answering those. I want to touch on something else and try to reverse engineer how you've done it. You ran an actual survey for SurveyMonkey for a specific thing, a naming exercise. I want to go through that with you and see how we can advise folks who want to do something similar, through your real experience. So tell me more about this project.

Morgan: It's going to be so meta, because everything we just talked about is relevant to this project in many ways. What we did this year at SurveyMonkey was launch a suite of solutions to automate the concept testing methodology and process. So everything I just described, you can basically do in SurveyMonkey by clicking a few buttons, without having to write your own survey or do any of your own analysis. It's really great. It's a suite of seven solutions for concept and creative testing.

Even in this interview, I've been using the term "concept testing" a good amount. So when we set out to name these solutions, our initial instinct was that concept testing was going to be the right name for them. But I wanted to make sure we put some rigor into naming these products; you know, product naming is very nuanced, and you can go in a lot of different directions.

At SurveyMonkey, a lot of our products have more practical or descriptive names. Some also have names that are much more of a brand, that don't really have any inherent meaning; it's the product name as a brand name, versus describing what it actually does. For these, especially since they were under SurveyMonkey's umbrella of market research solutions, we wanted to keep them pretty descriptive, so that people understood what they were exploring and buying.

When we went to actually test the names, we used the methodology I described before: a monadic design. We introduced the product description, then said, hey, we're thinking about a few different name options, what do you think of this one? And then we asked a couple of follow-up questions. And what were the names?

So we tested a couple of different things. Let's take the example of our product testing product; we have seven of them, for testing ads, logos, messages, whatever, but let's just use the product one as an example. We tested things like "concept testing", "product testing", "product concept test", different iterations of that.

And then we also tested "product analysis" and "concept analysis". The word "test" versus the word "analysis" as part of the product name was something we were interested in, because "test" or "testing" is a verb; it's part of the process of doing this work. But the analysis is what you actually get: the data, the results, the insights, the recommendation of what you're going to do with that knowledge. So we wanted to challenge conventional thinking, and the term we'd been using internally as we developed the products, and just see how that would play out.

When we got the results back, it was very, very clear that folks reacted positively to the "analysis" part of the name in the iterations we tested. By far and away, it was the number one winner across all seven products. And granted, we used that monadic methodology, so for it to pop up in every single product name set we tested, for all seven, was a clear pattern.

It really built that mountain of evidence, so we could go back to the executive steering committee and say: hey, we didn't expect this to win, but look, the data does not lie; let's figure out what's behind this. And we actually followed up with a couple of interviews. We had been doing user testing pretty much every week throughout the product development process, and we put some of the names in front of folks after we had gotten the results. For people who were gravitating more towards the "analysis" names, we dug in and asked why, and it was what I described earlier: people were saying, oh, the value here is in what you're getting, in the analysis and the insights it generates for you.

That's what I'm buying. So what we ended up doing is naming our products that way. If you go on the SurveyMonkey site, you'll see product concept analysis, logo design analysis, brand name analysis, et cetera, as the actual product names of the solutions we launched earlier this year.

Louis: What was the key question that you asked for those? Was it a choice between those names?

Morgan: Yeah, well, it's the methodology we talked about earlier. And actually, it's funny, I'm going to pull up the actual results here, because they're fascinating to share.

The way we talked about the methodology earlier was: okay, you expose the respondent to a stimulus and you ask follow-up questions. And that is what we did for a good part of this survey. But we had seven products and four or five names we were testing for each one,

and for us to also be purchasing B2B sample on top of that, we had to do a bit of a hybrid methodology. So in many cases, we asked questions as I described earlier, on a five-point scale, around things like appeal or purchase intent. I'm pulling them up. But then, in the second half of the survey, we actually exposed people to the other names:

hey, we're also considering names C, D, and E. Now we're asking a couple of different questions, like which is best, or easiest to understand, or most innovative, and we actually pit them head to head. In that case, you're not taking an aggregate score of "extremely" plus "very likely"; you're just looking at the metric of which name actually won out.

So in this case, we asked questions like overall appeal, uniqueness, relevance, purchase intent, but then the head-to-head metrics: which do you like best, which sounds more innovative, which do you want to learn more about, which one gives you the most confidence that it's a high-quality product, and which one's more memorable.

Those were the kinds of questions we asked. And it was interesting to see those results: by far and away, the names that had "analysis" in them were the clear winners.

Louis: So the head-to-head was, let's say, copy testing versus copy analysis? Or product testing versus product...

Morgan: It was always within the same product line.

Each survey would ask about just one of the seven. So it'd be something like product testing versus product concept analysis versus product design solution, or something like that. Those were the kinds of things we put head to head.

Louis: Okay. And your dog is also very interested in sales.

Morgan: She loves it!

Louis: That's how you know it's the right pet for you. I can't see her, but yeah, I can imagine how surveys excite dogs. So then, which audience did you reach out to? Did you reach out to existing customers, as well as people who are not customers? What was the audience?

Morgan: So we actually used SurveyMonkey Audience, which is our global panel, to source the respondents.

As for the way I usually like targeting for B2B: I mean, as you mentioned, there's the profiling that the panels do, and how much can you trust that? I work in this industry and I trust it a great deal, but I still know that, depending on how often panelists are re-profiled, people change jobs, people move across the country.

You always want to make sure you're getting the right group of people. So I usually pair profiling with screening questions at the beginning of the survey, asking people to provide the most up-to-date information so that you can target. What I did for this one was target, on the profiling side, people employed full time, and I think I selected a couple of different job functions, not just marketing or insights; there are a couple of other functions that are interesting for us, since we've got some product solutions: product management functions, or even startup founders, et cetera. So I paired that full-time-employed targeting, plus a select few job functions, with screening questions that, A, confirmed that profiling.

So I had screening questions for employment and job function, but I also had a question around involvement in market research at their company. You don't have to ask "do you do market research?" or "are you a market researcher?"; you can get around that by asking about their level of involvement. It can be, you know: you could be the sole decision-maker and hold the budget, you could be an influencer, you could be involved, or you may not be involved at all.

So we basically just screened out anyone who wasn't involved at all, at least in this survey that we did.

Louis: And screening questions are very important. I think I've made the mistake in the past: it's easy to have a screening question that says, you know, "are you involved in market research, yes or no?", that kind of very easy binary question. You need to be careful not to be too obvious about what you're screening for or against.

Morgan: That's a really good callout, and probably something I could have mentioned at the very top of this interview, when you were asking about survey no-noes and big mistakes people make. I think screening questions are ones that do require some thought. Part of it is because, as we talked about, there's that small group of panelists who are more professional survey takers and are kind of on to you, wanting to qualify for the survey to get that incentive.

You don't want to trick people; that's not what this is. But you do want to make sure you're getting the most accurate information. There are two main types of screening questions. One is a behavioral screening question: you know, are you in the market to buy a toothbrush? Maybe you wouldn't ask that one, but something where you're asking about behavior.

So: how often are you watching TV? Or it could be something like, what type of pet do you have, if any, where you're asking about cats and dogs and all these other kinds of animals. That kind of question, where you've got a multi-select picklist of answer options, is a little better for screening than a yes-or-no question, because most of the time you're after the yes answer,

and professional survey takers will be on to you there. The other type of screening question is an industry screener. I see this a lot in B2B research, but also in B2C, especially when you're testing new things that are a little more sensitive. You don't want it getting out there, and you don't want a competitor, or someone in the same industry, taking your survey. So you might ask about a whole list of industries, "which of the following do you work in?", or "do you have a family member who works in any of these?", and you screen out people who work in the same industry as you.

I see this a lot with major CPG companies doing research. If you've got someone in your family who works in beauty care, and the survey is all about a brand-new beauty product, you don't want someone reading your survey and getting on to your next innovations.
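Here's a minimal Python sketch of the screening logic Morgan describes, with hypothetical answer options: a multi-select behavioral screener that hides the qualifying choice among plausible ones, plus an industry screener that terminates same-industry respondents. None of this is SurveyMonkey's actual implementation; it just illustrates the two screener types.

```python
# Hypothetical picklists: the behavioral screener never signals which
# answer qualifies, and the industry screener removes potential insiders.
PET_OPTIONS = ["dog", "cat", "fish", "bird", "none of these"]
BLOCKED_INDUSTRIES = {"oral care", "market research"}

def qualifies(pets_selected, industries_selected):
    """True if the respondent passes both screeners (say, a dog-owner study)."""
    assert set(pets_selected) <= set(PET_OPTIONS)   # answers come from the picklist
    if "dog" not in pets_selected:                  # behavioral screener
        return False
    if BLOCKED_INDUSTRIES & set(industries_selected):
        return False                                # industry screener
    return True

print(qualifies(["dog", "fish"], ["retail"]))           # True: in scope
print(qualifies(["dog"], ["market research"]))          # False: industry insider
print(qualifies(["cat"], ["retail"]))                   # False: not a dog owner
```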

Louis: Okay, yeah. This is a very important point about screening questions that I found out the hard way. We just have a few minutes left, and I wanted to talk to you about another topic that, to me, is very misunderstood, but I think you have the knowledge to explain it simply: sample size.

This is something quite counter-intuitive when you think about it, because once you reach a certain threshold in your survey's sample size, adding 500 more respondents, even 10,000 more respondents, won't actually move the needle that much. You're still going to have a... I'm going to forget the fucking metric name.

But you're going to have high confidence regardless. I'm absolutely butchering this for you, but you're going to explain it much better. So tell me more about this concept of sample size, and what to watch out for.

Morgan: Yes. And [inaudible] has a great sample size calculator, because I don't know that I'd even be able to tell you the exact formula for calculating a sample size based on the confidence level and margin of error you're comfortable with. But we've got a great one that you can use.

The way I like to explain sample size is that it's a function of your population size. Say you're launching a kid's toothbrush; the people buying it are going to be parents of children of a certain age. There's probably some estimate of how many parents of children that age there are in, say, the United States.

That's a pretty big number, but your sample size for that group probably doesn't need to be as large as a sample of the entire US population, because what you're looking at is something a lot more specific. It becomes much clearer when you're targeting something like neurosurgeons, where there aren't that many neurosurgeons in the US.

To sample enough to have answers that are representative of that population's beliefs, you just don't have to survey as many people. So sample size is a function of the population size, and it's a function of the margin of error you're comfortable with. Most people have been exposed to the margin of error through polling results in the news.

In most cases, they'll say: oh, plus or minus 3% around a particular metric. What that means is: we've sampled enough people that we're confident in what the sample is telling us. Say, I don't know, 50% of people are aware of the Colgate brand; the actual percentage of people aware of the Colgate brand at the population level is within 3% of that,

so anywhere from 47% to 53%. The more people you sample, the tighter that margin of error is, and the more confident you are that your sample truly reflects the population. The rule of thumb I like to give people, using the general population as an example: if you survey 400 people, that's usually about plus or minus 5% margin of error; if you survey a thousand people, that's plus or minus 3%. That 3% is pretty much the industry standard as far as where you'd want to be to publish research and get it picked up by journalists or the media. That's why you see so many people say, oh, you need a thousand people; it's because that's right where the margin of error shrinks down to plus or minus 3%.

If you're just doing a quick survey for a gut check, or a just-for-fun survey to publish on social media, you probably only need a couple hundred; you might not need that full thousand. The other thing to think about is how hard it's going to be to find those people. If it's a really niche B2B target, you probably don't need the thousand either, because the population size is lower.

I don't know if that was as simple as you hoped it would be, but that's how I like to think about it.

Louis: No, thanks for that. And as you said, there are plenty of sample size calculators online, SurveyMonkey's being probably the simplest and most straightforward. But just as a quick illustration of what you described:

let's say we have 10 million as a population size and a 3% margin of error; you need 1,067 people, as you mentioned. But if your population size is 1 million, it's actually a very close number: 1,066.

Morgan: So, for the folks out there who are mathematicians, it has to do with the denominator getting closer and closer to one as the population size keeps growing. At some point, you hit a threshold where, whether your population is a hundred thousand or a hundred million, the sample size you need is very similar.
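The numbers Louis quotes fall out of the standard sample size formula for a proportion, with a finite population correction. Here's a minimal Python sketch, assuming the common z = 1.96 (95% confidence) and p = 0.5 (the most conservative variance assumption):

```python
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    """Sample size for estimating a proportion, with finite population correction.
    z=1.96 gives 95% confidence; p=0.5 maximizes variance (worst case)."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite correction

print(sample_size(10_000_000, 0.03))   # 1067, the number Louis quotes
print(sample_size(1_000_000, 0.03))    # 1066, barely different
print(sample_size(10_000_000, 0.05))   # ~385, near the "400 people" rule of thumb
```

The denominator Morgan mentions is the `1 + (n0 - 1) / population` term: as the population grows, it approaches 1, so the required sample barely moves.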

Louis: Okay, well, thanks again, Morgan, for going through all of this with me in that much detail.

I'm going to ask you one last question before I let you go. What are the top three resources you'd recommend to marketers or people listening to this podcast?

Morgan: Okay, yeah, sure. This is funny, because you told me you were going to ask me this question, and I had this book on my nightstand. One thing that I think is incredibly important for marketers, especially those who use market research in their job, is to be able to tell a story with that data.

It's helpful whether you're an entry-level marketer trying to understand what's working and what's not, whether you're pitching your next big campaign idea, or, eventually, as you move into leadership roles, when you're helping to communicate strategy in the boardroom, for example. A book that I love is called DataStory, by Nancy Duarte. It's a really great resource. And here's kind of a hack: you can go on YouTube, where she's done a couple of talks on this topic, 10-minute videos, 30-minute videos, and get a lot of really great tips and tricks from her on telling stories with data.

Still on the book front, one thing I'm loving right now is an app called Blinkist. It takes business books, though not just business books; mostly non-fiction, across a lot of different categories, and turns them into SparkNotes-esque summaries, in both written and audio form. So what would have been my commute to work, which is now my 30-minute morning dog walk, is, you know, like an audio podcast of short summaries of really great business books. I've been able to plow through books much faster than I would have otherwise.

And the last one, I'd say, is honestly good old LinkedIn. Follow and connect with other marketers in your field or your home city; start building relationships, start building a following. Then you can start to create, I've gone through a couple of workshops on this, a personal board of directors, both inside and outside of your company, through the relationships you build over the course of your career, to help guide you through decisions or job changes. You can even ping people for feedback, like you would in some qualitative research.

So LinkedIn is just an incredible resource for connecting with people and building relationships. And honestly, Louis, that's one of the places where I found and heard of you, so it comes full circle.

Louis: Exactly. Yeah, great recommendations. Blinkist is also something I've started using recently.

When I've read a book a long time ago, or listened to it, I like to reread the summary to remind me of its essence. So yes, great recommendations all around. I had never heard of the first book you mentioned, so I'm going to check it out. And again, Morgan, you've been a pleasure.

Thanks so much for going into the details of survey making and design, and I'll talk to you soon.
