Marketing has long worked on the assumption that there are two paths to understanding people and markets — often described as “nuance versus numbers.”
Nuance wants to know how people feel, how brands are conceptualized and understood, what unknown drivers cause human behavior and what drives business success. Numbers want to know exactly how big our markets are, what people buy, what price they pay, what path they took and what drives business success.
The difference between these two points of view is most pronounced in Insights and Market Research, where individuals proudly describe themselves as “quant researchers” or “qual researchers.” Now, I do understand the need for specialization, but sometimes we forget that the subjects of the research — people, products and brands — are the same.
Here’s a thought experiment: Imagine you are a brand researcher, and I give you a file of one million tweets. Your job is to extract insight from this cache and use it to move the business forward. I then ask: “Are you doing quant or qual research?”
Zen koans and the art of inductive/abductive reasoning
The question is a koan because both answers are equally right and equally wrong. And, like many good koans, the solution is to un-ask the question. Do that, and you’ll see that the assumption of the question — the duality of the two types of research — is what is wrong.
One reason this thought experiment trips us up is that we associate qual with depth and quant with scale. Yet here the data has both depth (the messiness of random human thoughts) and scale (the million observations).
To get nuance, we need depth. We need to interact with individuals, use observations to spot patterns and then form our hypotheses and theories. This inductive reasoning is the core of most qualitative approaches. For it to work, depth is essential — 20 questions on a questionnaire won’t cut it. We need to probe and dive. That is expensive to do, and the results are complex and costly to synthesize. This is why qual is usually associated with small sample sizes.
For numbers, we need scale — the more appropriate data, the better. We then apply analytics and deductive reasoning: we start with a theory and hypotheses and use (frequentist) statistical methods to confirm them (or not). These statistical techniques require large sample sizes and consistent data, which is why the technologies and platforms supporting them have treated quant research as an automation and repeatability exercise. That is why quant research has historically been dominated by the problem of scale.
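To make that deductive workflow concrete, here is a minimal sketch in Python. The data is simulated purely for illustration: we hypothesize that a price change moved customer spend, then let a frequentist two-sample t-test confirm or reject that hypothesis.

```python
# A minimal sketch of the deductive, frequentist workflow.
# The data here is simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# H0: customers who saw the new price spend the same as before.
spend_before = rng.normal(loc=50.0, scale=12.0, size=5000)  # large samples:
spend_after = rng.normal(loc=51.5, scale=12.0, size=5000)   # quant needs scale

# Frequentist two-sample t-test: confirm or reject H0 at a fixed threshold.
t_stat, p_value = stats.ttest_ind(spend_before, spend_after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the price change appears to move spend.")
else:
    print("Fail to reject H0: no detectable effect at this sample size.")
```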
The problem with dichotomies
You will often see tables like the following offered to summarize the differences:
Method    | Qualitative                     | Quantitative
Outcome   | Nuance                          | Numbers
Approach  | Synthesis / Inductive Reasoning | Analysis / Deductive Reasoning
Challenge | Depth                           | Scale
We accepted this division not because it was ideal, but because it was necessary. That said, many proponents of mixed-mode methods combine the approaches and build workflows that use the best of both.
The rise of abductive reasoning is changing the quant/qual dichotomy. In abductive reasoning, we go beyond just looking for patterns and ask, “What is the most likely explanation for this (surprising) observation?” In qual, many researchers use what is known as an “abductive loop” where they gather data, notice a surprise, form the best explanation, then gather more data. In a similar way, quant methods now often go beyond frequentist approaches (hypothesis testing) to Bayesian methods, where data is used to successively update our beliefs on what is going on.
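As an illustration of that Bayesian shift, here is a minimal sketch using standard Beta-Binomial conjugate updating. The conversion counts are invented for illustration; the point is that each batch of data successively updates our belief rather than testing it once.

```python
# A minimal sketch of Bayesian belief updating (Beta-Binomial conjugacy).
# All counts are invented for illustration.
from scipy import stats

# Prior belief about a conversion rate: weakly informative Beta(2, 2).
alpha, beta = 2.0, 2.0

# Each batch of observations updates the belief rather than testing it once.
batches = [(12, 88), (30, 170), (55, 245)]  # (conversions, non-conversions)
for conversions, misses in batches:
    alpha += conversions  # successes update alpha
    beta += misses        # failures update beta
    posterior = stats.beta(alpha, beta)
    lo, hi = posterior.interval(0.95)
    print(f"After {conversions + misses} more observations: "
          f"mean={posterior.mean():.3f}, 95% credible interval=({lo:.3f}, {hi:.3f})")
```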
However, both are still beholden to the depth/scale dichotomy: abduction requires deep observations; Bayesian methods rely on data volume.
Bringing it all back home
Fortunately, new technologies and models allow us to have depth at scale. It wasn’t that qual needed small sample sizes — it was simply too expensive to collect the observations and harder to synthesize the results if there were too many subjects. Small sample sizes are not a definition of qual. They are just a technological limitation.
Qual data collection at scale is now completely feasible (either as primary interviews or as mineable data, such as the tweet example), thanks to newer AI approaches. Specifically, Generative AI/LLMs can perform computational abduction — meaning they have the reasoning ability to perform the necessary synthesis.
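To picture what qual at scale could look like on the tweet example, here is a hedged sketch of a chunk-then-synthesize pattern. The call_llm helper is hypothetical, a stand-in for whatever LLM client you use, and the prompts illustrate the abductive ask: propose the most likely explanations, not just counts.

```python
# A hedged sketch of LLM-assisted synthesis over a large qual corpus.
# `call_llm` is a hypothetical stand-in for a real LLM client; the
# chunk-then-synthesize pattern is the point, not the specific API.
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM, return its text reply."""
    raise NotImplementedError("Wire this to the LLM provider of your choice.")

def synthesize_tweets(tweets: List[str], chunk_size: int = 500) -> str:
    # Pass 1: abductive summarization of each chunk. Ask the model for the
    # most likely explanations behind surprising patterns, not just counts.
    chunk_findings = []
    for i in range(0, len(tweets), chunk_size):
        chunk = "\n".join(tweets[i:i + chunk_size])
        chunk_findings.append(call_llm(
            "Here are customer tweets about our brand:\n" + chunk +
            "\n\nWhat surprising patterns appear, and what is the most "
            "likely explanation for each?"
        ))
    # Pass 2: synthesize chunk-level findings into brand-level hypotheses.
    return call_llm(
        "Combine these findings into the three most plausible hypotheses "
        "about how customers perceive the brand:\n" +
        "\n---\n".join(chunk_findings)
    )
```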
The qual/quant divide will fade over the coming years as we realize that the old depth vs. scale dichotomy is no longer a limitation. We will see a shift in how data is gathered and how insights are derived, with no trade-off between depth and scale. We will no longer define our research by these binary limitations, but instead derive the truth about our customers, products and brands from simultaneous depth and scale.