Published on: November 26, 2024 by Kirk Wakefield
When we ask fans, “Which of these brands do you prefer to use in the future?”, we capture responses on 0–100 slider scales. These scales are anchored by four descriptive labels (adapted from the BAV Group):
Never → If no other choice → One of several → The one I prefer
From 2,206,516 fan responses, we’ve analyzed brand preference scores for 2,019 sponsors and their competitors. The data shown below illustrates how good survey questions do three things:
Reliable questions yield consistent results over time, unlike formats such as unaided recall, rank-order, or select-one. For example, asking fans “What wireless brands come to mind?” or to “Name your top three movies” can produce varying answers from month to month—you might forget one or discover a new favorite.
In contrast, rating specific options on a consistent scale delivers stability. If people were asked to rate Gladiator, Gladiator II, and Wicked 1 on a 1–10 scale, we would find that their average ratings stay consistent from month to month.
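To make that notion of reliability concrete, here is a minimal Python sketch using hypothetical ratings (none of these numbers come from the survey data above). It computes a test-retest correlation between two monthly waves of the same scale question; values near 1 indicate a stable, reliable measure.

```python
# Illustrative sketch (hypothetical data): test-retest reliability of a rating scale.
# The same respondents rate the same movie in two consecutive months; a high
# correlation between the two waves indicates a stable (reliable) measure.
from statistics import correlation  # available in Python 3.10+

# Hypothetical 1-10 ratings of one movie from five respondents, one month apart
wave_1 = [8, 6, 9, 7, 5]
wave_2 = [8, 7, 9, 6, 5]

test_retest_r = correlation(wave_1, wave_2)
print(f"Test-retest correlation: {test_retest_r:.2f}")  # close to 1.0 means consistent answers
```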
By asking the right questions, we gain deeper, more reliable insights into consumer preferences, helping sponsors and competitors alike make informed decisions.
Data collected using rating scales (like scoring something from 1 to 10) can be analyzed with advanced statistical methods to find patterns and relationships. In contrast, data from nominal questions (picking one option) or ordinal questions (ranking items in order) can’t be analyzed as thoroughly or flexibly.
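As a rough illustration of why scale data supports richer analysis, here is a minimal Python sketch with hypothetical data: rating-scale responses allow means and correlations, while pick-one (nominal) answers only support counting how often each option is chosen.

```python
# Illustrative sketch (hypothetical data): what different question formats support.
from collections import Counter
from statistics import correlation, mean  # correlation requires Python 3.10+

# Rating-scale data: 1-10 preference ratings for two brands from the same six fans.
brand_a = [9, 7, 8, 6, 10, 7]
brand_b = [8, 6, 7, 5, 9, 7]
print("Mean rating, brand A:", round(mean(brand_a), 2))
print("Correlation of A and B ratings:", round(correlation(brand_a, brand_b), 2))

# Nominal data: "pick one" answers only support frequency counts.
picks = ["Brand A", "Brand B", "Brand A", "Brand A", "Brand B", "Brand A"]
print("Pick-one counts:", Counter(picks))
```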
Bad questions
Leading and loaded questions elicit few, if any, responses in disagreement because disagreeing makes no sense. Some examples:
Leading questions contain the answer in the question itself. Any question starting with “I am more likely to” is a leading question.
Loaded questions contain bias, assumptions or information that can influence responses. The question design subtly pushes toward the desired response rather than an impartial, honest answer.
Double-barreled questions ask about two or more issues at once. Respondents might be positive about one but negative about the other. For example, post-game surveys often ask attendees to rate ease of entering and exiting parking areas on a single scale.
Each of these leads to skewed data and reduced credibility, as well as ethical concerns about knowingly fielding faulty questions.