Marketing Research SIG Archives

Call for Papers | Journal of Marketing: Analyzing Trade-Offs and Advancing Solutions to Society’s Challenges Using an Integrated Multiple Stakeholders Perspective
July 9, 2025
Special Issue Editors: Pradeep Chintagunta, John Lynch, Martin Mende, Maura Scott, Rebecca Slotegraaf, and Jan-Benedict Steenkamp

Increasing the ecological value of marketing research by examining the interactions among and between business actors, institutions, and systems can help make scholarly marketing research more meaningful and impactful (Van Heerde et al. 2021). Incorporating and integrating multiple stakeholder perspectives and addressing the corresponding trade-offs can strengthen the rigor and relevance of an inquiry, with the potential to enrich outcomes for all stakeholders (e.g., Berry et al. 2024). 

Managers, academics, and policy makers must address social and business challenges against the backdrop of stakeholders’ divergent priorities and perspectives on important issues. Indeed, many of the world’s most pressing topics affect and are affected by multiple stakeholders in areas such as (but not limited to) the infodemics crisis, the need to deliver quality health care and financial services for all, the sustainability of the planet, the ability to effectively leverage technology, unintended consequences of marketing activities, global differences in social/political priorities, and marketing’s role in advancing human rights. Organizations and managers must navigate the needs of multiple stakeholders, including consumers, communities, customers, employees, executives, investors, and society. A stakeholder view, in which the organization focuses on the well-being of a variety of stakeholders in the value chain, can align with an organization’s other longer-term goals, such as profitability (Berry et al. 2024).

We recognize that many real-world problems combine a marketing issue for one stakeholder with financial, human resource, social, cultural, or even moral issues for another stakeholder. This contributes to the richness and ecological validity of research involving multiple stakeholders. As such, we welcome research that takes a multidisciplinary perspective as long as the marketing lens plays a key role in theorizing and analysis.


The special issue is not limited to a particular context, but for illustrative purposes, consider health care as an example. Consumers need affordable, high-quality health care, and communities need equitable health outcomes. A government may prioritize accessible health care for its citizens, while health care providers seek to run a profitable business with a respectable reputation. Insurers need to transparently provide coverage while containing costs. Health care employees require a reasonable workload and fair compensation. Yet, trade-offs exist that limit favorable outcomes for all stakeholders in a health care ecosystem. Given any complex ecosystem, how can marketing explore the needs, decisions, and processes of multiple stakeholders to shed light on the tensions and necessary trade-offs for all stakeholders? What trade-offs are acceptable, and what are the potential impacts of such trade-offs (e.g., positive and negative financial implications, measurable advancements toward societal goals)?

The editorial mission of the Journal of Marketing is to develop and disseminate “knowledge about real-world marketing questions useful to scholars, educators, managers, policy makers, consumers, and other societal stakeholders around the world.” Empirical research to date, however, has typically reflected only one or two sets of conventional stakeholder perspectives (e.g., purely consumer- or firm-focused, or focused on the salesperson–customer dyad).

We introduce a special issue of the Journal of Marketing focused on understanding the challenges and opportunities related to tensions and divergent priorities among multiple stakeholders, including new and relevant stakeholders.

This special issue encourages empirical research and analytical modeling that takes a 360-degree view to include new and relevant stakeholders in the research process, especially work that builds on existing stakeholders while broadening existing lenses via new stakeholder connections. We seek papers that uncover insights into how to deliver economic returns for firms while also delivering broader beneficial contributions on topics such as individual growth and well-being, societal cohesion, firm investment in organizational values, democratic success, and social challenges.

Many business questions involve various stakeholders who may have competing interests. For instance, MacInnis et al. (2020) identify key marketplace stakeholders that influence consumers and customers as including society, media, government and nongovernment organizations, and businesses, among others. As another example, the United Nations recognizes “major groups” of stakeholders as including women, children, and youth; indigenous peoples and their communities; nongovernmental organizations; local authorities; farmers; workers and trade unions; business and industry; and the scientific and technological community (United Nations, n.d.). In marketing, an integrated stakeholder perspective might consider not only consumers, frontline service employees, and retailers or other businesses but also communities where a product is produced (yet not consumed), measurable impacts on the environment or society, internal impacts on employees, behaviors of policy makers or governmental agents (e.g., Wang et al. 2021), top management teams, shareholders and investors, or the media (at the local, regional, and/or [inter-/supra-] national levels).

Key Criteria for Publication in the Special Issue

The special issue is interested in new marketing knowledge that helps address substantial and important societal and business issues, generated through the perspectives of multiple stakeholders (three or more). Multidisciplinary research is welcome though not required. Empirical research and analytical modeling are welcomed and encouraged.

Key criteria that will be used to assess a submission include:

  • Scope of the research question. We encourage research that seeks to tackle large-scale societal-business challenges rather than narrow or incremental topics.
  • Novelty of the insights.
  • The extent to which the novel insights are derived from at least three key stakeholders. New, relevant stakeholder perspectives are encouraged.
  • The magnitude of the behavioral change and/or its impact stemming from the work, such as the number of people likely to change their behavior based on the research (in the short or long term) or the number of people who may benefit from the findings if implemented. These include managers, policy makers, nonprofits, consumers, and communities.
  • The broad potential impact of the work.

Submission deadline (now extended!): July 1, 2026

All manuscripts will be reviewed as a cohort for this special issue of the Journal of Marketing. All submissions will go through the Journal of Marketing’s double-anonymized review and follow standard norms and processes. Submissions must be made via the journal’s online submission system; author guidelines are available on the journal’s website. For any queries, feel free to reach out to the special issue editors.

Special Sessions

Everyone interested in learning more about this special issue is warmly invited to attend the Zoom webinar on Monday, December 15, at 11 a.m. ET.

References

Berry, Leonard L., Tracey S. Danaher, Timothy Keiningham, Lerzan Aksoy, and Tor W. Andreassen (2024), “Social Profit Orientation: Lessons from Organizations Committed to Building a Better World,” Journal of Marketing, 89 (2), 1–19.

MacInnis, Deborah J., Vicki G. Morwitz, Simona Botti, Donna L. Hoffman, Robert V. Kozinets, Don R. Lehmann, John Lynch, and Cornelia Pechmann (2020), “Creating Boundary-Breaking, Marketing-Relevant Consumer Research,” Journal of Marketing, 84 (2), 1–23.

United Nations (n.d.), “Major Groups and Other Stakeholders.”

Van Heerde, Harald J., Christine Moorman, C. Page Moreau, and Robert W. Palmatier (2021), “Reality Check: Infusing Ecological Value into Academic Marketing Research,” Journal of Marketing, 85 (2), 1–13.

Wang, Yanwen, Michael Lewis, and Vishal Singh (2021), “Investigating the Effects of Excise Taxes, Public Usage Restrictions, and Antismoking Ads Across Cigarette Brands,” Journal of Marketing, 85 (3), 150–67.

Call for Papers | Journal of Marketing: Special Issue on Organic Marketing Theory
June 10, 2025
Special Issue Editors: Ajay Kohli, Page Moreau, Rebecca Slotegraaf, and Jan-Benedict Steenkamp

Organic marketing theory provides insights into marketing phenomena, their causes and consequences, and conditions under which these causes and consequences are stronger or weaker. Theory papers frequently include a set of formal propositions describing causal relationships among well-defined constructs, and arguments in support of the propositions (what causes what, and why).

The special issue is open to “pure theory” papers advancing novel constructs and propositions (frequently, but not always, inducted from qualitative data) as well as “theory + empirical” papers that develop novel propositions, operationalize them as hypotheses, and empirically test them. Note that a very strong theoretical contribution will be expected in all types of papers. Below, we provide more details on what a strong theoretical contribution would include. We also provide guidance on the types of papers that would and would not be a fit for this special issue.

Contributions to Organic Marketing Theory

The following represent contributions to organic marketing theory. Note that some are more likely to be acceptable for the special issue than others.

  • Development of novel construct(s) reflecting marketing phenomena. Such papers should delineate relationships with, and distinctions from, closely related extant constructs; discuss the constructs’ importance for marketing; and offer formal propositions, supported by conceptual arguments, linking the construct(s) to other constructs.

Novel Propositions

  • Novel propositions about the functional forms of relationships between known/existing constructs related to marketing phenomena, together with supporting arguments.
  • Novel propositions that a well-accepted general theory from another discipline does not “work” in a marketing context, together with supporting arguments (these propositions tend to be rare, but could be important if the well-accepted theory is widely used).
  • Novel propositions describing causal relationships between known/existing constructs related to marketing phenomena, together with supporting arguments (these propositions/arguments are unlikely to be accepted for publication unless a strong case for their importance can be made).

It is important for authors to be aware of extant research in order to ensure that proposed constructs or propositions are novel (and not repetitive of existing research). Novel arguments that support known relationships are unlikely to be accepted for publication unless a strong case can be made for their importance. See the key selection criteria section below.

Organic Marketing Theory and Overlap with Phenomena of Interest to Other Disciplines

At its core, marketing is about exchanges between/among entities, and people, processes, and institutions that enable and encourage exchanges.

  • Given the breadth of marketing, it is natural for marketing phenomena to overlap with phenomena of interest to other disciplines such as economics, psychology, sociology, strategy, organizational behavior, operations, and information technology, among others. As such, organic marketing theory may potentially have insights of value to disciplines other than marketing.
  • Who develops organic marketing theory is not material. What is material is that the theory (or theoretical contribution) provides novel insights into important phenomena that are uniquely/primarily about marketing, ideally with relatively little overlap with other disciplines.

What Is NOT of Interest for the Special Issue

  • There are many different types of conceptual contributions (MacInnis 2011). The special issue is interested in organic theory, a particular type of conceptual contribution described above. It is NOT interested in other types of conceptual contributions, valuable as they may be. For example, a conceptual paper that summarizes extant empirical evidence to derive conclusions is a review paper (empirical papers with this goal are meta-analyses). While these types of papers are beneficial, they are NOT aligned with this special issue. Similarly, a paper that advocates for a particular position on an issue would NOT be a good fit with the special issue.
  • Relatedly, the special issue is NOT interested in work that takes one or more theoretical propositions from another discipline and applies them in a marketing context to generate new insights about marketing phenomena. While the work may well be important and eminently publishable, it would not be a good fit with the special issue because it would not reflect original theory but rather the application of an existing theory. For example, the following two cases would NOT be a good fit with the special issue:
    – Straight application: Taking a theoretical proposition in another discipline and applying the proposition to a marketing context to explain or predict marketing phenomena.
    – Adapted application: Adapting constructs in a theoretical proposition in another discipline and applying the adapted/modified proposition to a marketing context to explain or predict marketing phenomena.

Key Criteria for Publication in the Special Issue

Key criteria that will be used to assess a submission include:

  • Novelty of the insights.
  • The extent to which the novel insights are organic (i.e., uniquely/primarily about marketing phenomena). In general, novel constructs and propositions that are uniquely/primarily about marketing phenomena are more organic.
  • The extent to which the novel constructs and/or propositions are different from those available in other disciplines.

Importance of the novel insights will be assessed by:

  • The number of people likely to change their behavior based on the research (in the short or long term). These include managers, public policy makers, consumers, and other marketing academics.
  • The magnitude of their behavioral change and/or its impact.
  • The standing or position of the persons who will likely change their behavior (as an indicator of the impact of their behavioral change).

Examples of Papers Advancing Organic Marketing Theory

The following papers, listed in order of publication date, are some exemplars of organic theory building research. The list is not exhaustive, but it provides concrete examples that span both time (four decades) and subfields in marketing (strategy, consumer behavior). While these papers are challenging to write, their citation counts reflect their significant impact on the field. (Citation counts are from Google Scholar as of May 30, 2025.)

  1. Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry (1985), Journal of Marketing, 49 (4), 41–50. Develops a new theory of service quality (the GAPS model). 48,729 citations.
  2. Zeithaml, Valarie A. (1988), Journal of Marketing, 52 (3), 2–22. Uses means-end chain theory developed in marketing to develop a new theory, with propositions, on the relations between price, quality, and value. 35,504 citations.
  3. Aaker, David A. and Kevin Lane Keller (1990), Journal of Marketing, 54 (1), 27–41. Develops a theory to explain when brand extensions are more likely to be positively evaluated by consumers. 7,146 citations.
  4. Kohli, Ajay K. and Bernard J. Jaworski (1990), Journal of Marketing, 54 (2), 1–18. Develops the construct of market orientation and advances propositions about its antecedents and consequences. 16,525 citations.
  5. Keller, Kevin L. (1993), Journal of Marketing, 57 (1), 1–22. Develops a theory of brand equity and advances propositions linking it to brand awareness and brand image. 31,433 citations.
  6. Aaker, Jennifer L. (1997), Journal of Marketing Research, 34 (3), 347–56. Develops the novel construct of brand personality and its five dimensions. 17,098 citations.
  7. Fournier, Susan (1998), Journal of Consumer Research, 24 (4), 343–73. Develops a theory of consumer–brand relationship quality. 14,240 citations.
  8. Brakus, Joško, Bernd H. Schmitt, and Lia Zarantonello (2009), Journal of Marketing, 73 (3), 52–68. Develops the novel construct of brand experience and advances propositions about its consequences. 8,363 citations.
  9. Lemon, Katherine N. and Peter C. Verhoef (2016), Journal of Marketing, 80 (6), 69–96. Develops the construct of customer experience, its component stages, and contributing touchpoints. 8,494 citations.
  10. Molner, Sven, Jaideep C. Prabhu, and Manjit S. Yadav (2019), Journal of Marketing, 83 (2), 37–61. Develops the novel construct of market scoping mindset and advances propositions linking it to its consequences.
  11. Siebert, Anton, Ahir Gopaldas, Andrew Lindridge, and Cláudia Simões (2020), Journal of Marketing, 84 (4), 45–66. Develops a novel typology of customer journeys and advances propositions linking them to their consequences.
  12. Burchett, Molly R., Brian Murtha, and Ajay K. Kohli (2023), Journal of Marketing, 87 (4), 575–600. Develops the novel construct of secondary selling and advances propositions linking it to its consequences.

Submission deadline (now extended!): October 1, 2026

All manuscripts will be reviewed as a cohort for this special issue of the Journal of Marketing. All submissions will go through the Journal of Marketing’s double-anonymized review and follow standard norms and processes. Submissions must be made via the journal’s online submission system; author guidelines are available on the journal’s website. For any queries, feel free to reach out to the special issue editors.

Reference

MacInnis, Deborah J. (2011), Journal of Marketing, 75 (4), 136–54.

Unpacking the Instrumental Variables Approach
August 22, 2024

This piece draws on the latest research to help marketing scholars (1) identify confoundedness concerns in their empirical context, (2) transparently discuss the assumptions that need to hold for the instrumental variable approach to be valid in uncovering a causal effect, and (3) evaluate the plausibility of these assumptions.

Marketing scholars and practitioners frequently encounter causal questions related to strategic marketing decisions. Examples of such decisions include pricing, advertising, market entry, product development, brand positioning, contractual choices, and distribution decisions, to name a few. The factors and rules shaping strategic marketing decisions are often not fully observed by the researcher. When unobserved factors are also associated with the outcome of interest, this confounding relationship, known as the omitted variables or common causes issue, hinders the identification of the causal relationship of interest. For example, when estimating a causal impact of advertising spending on demand, researchers may find that both advertising spend and sales of a product are driven by the product’s inherent market potential. Unobserved confounders create an identification challenge that empirical marketing scholars and practitioners often face when answering causal questions related to marketing decisions.


While randomized controlled experiments may be viable in some cases to address this issue, for many important strategic decisions, experimentation may not be feasible or ethical. In such cases, researchers use quasi-experimental approaches. Recent methodological overviews provide detailed guidance for a variety of quasi-experimental methods, including the instrumental variable (IV) approach, and reviews of the marketing literature on quasi-experimental methods identify instrumental variables as the most common method.

Despite the broad use of the IV approach, the nature of confoundedness in a particular empirical context and how the proposed IV addresses it is, unfortunately, not always as clear as it could be. In this piece, we build on existing work to provide marketing scholars and practitioners with a resource that we hope will help them (1) identify confoundedness concerns in their empirical context, (2) transparently discuss the assumptions that need to hold for the IV approach to be valid in uncovering the causal effect, and (3) evaluate the plausibility of these assumptions.


We hope researchers use this piece not as a checklist but to critically assess whether the IV approach is appropriate given their research question and data. The IV approach has been used across markedly different empirical contexts and to answer a wide range of causal questions in the marketing literature. For example, recent articles using the IV approach examine the impact of review variance on demand, the role of television advertising in satellite operators’ commercial success, the influence of pictures on engagement with social media posts, and whether money-back guarantees can serve as a signal of quality, to name a few. Each of these questions and empirical contexts presents a unique identification challenge, and the validity of the IV approach in each case has to be established based on a unique set of facts and arguments. Papies, Ebbes, and Feit (2023, p. 281) note, “One of the main lessons from history is that there are no easy turn-key solutions to an endogeneity problem. Each of the methods used to address endogeneity in observational data relies on assumptions. It is critical that researchers using these methods carefully assess these assumptions in the context of their research question and data.” We hope that this piece will aid readers in this regard.

The Setup

Figure 1 presents a directed acyclic graph (DAG) demonstrating the omitted variable bias problem arising from unobserved confounders.[1] The causal impact we want to identify is the influence of treatment D on outcome Y (D → Y). For example, we might be interested in the impact of price (D) on sales (Y) for diet soda. The observed common causes W and the unobserved common causes U (unobservability to the researchers is represented by dashed lines) impact both D and Y. These sets of variables are referred to as confounders for the direct causal relationship of interest, because they lead to an association between D and Y even in the absence of a causal relationship. For example, we might imagine that a manager’s sales expectations may be associated with both the pricing decision and the sales outcomes. Typically, expectations are unknown (an example of U). The price also responds to input costs, which may also be associated with demand (an example of W if observed). For example, in the case of diet soda, aspartame prices may go up when the demand for the diet soda category increases. It is relatively straightforward to deal with confounding due to W by conditioning on W, since these variables are observable.[2] The main identification challenge to uncover the causal impact D → Y arises from the existence of unobserved confounders U.
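The diet soda example can be made concrete with a short simulation. This is a sketch with hypothetical coefficients (the price equation, the true price effect of −2.0, and the confounder loadings are all invented for illustration, not taken from any study): an unobserved demand shock U that drives both price and sales biases the naive regression of sales on price.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved confounder U: e.g., the manager's demand expectation.
U = rng.normal(size=n)

# Price responds to U: the manager raises price when demand looks strong.
price = 10 + 0.8 * U + rng.normal(size=n)

# True causal price effect is -2.0; U also lifts sales directly.
sales = 50 - 2.0 * price + 3.0 * U + rng.normal(size=n)

# Naive regression slope of sales on price: cov(price, sales) / var(price).
c = np.cov(price, sales)
ols = c[0, 1] / c[0, 0]
print(f"naive OLS estimate: {ols:.2f} (true effect: -2.0)")
```

Because U pushes price and sales in the same direction, the naive slope is pulled well above the true −2.0, understating how much demand falls when price rises.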

Figure 1: DAG Representation of the Omitted Variable Bias and IV Approach.

Figure 1 also includes a variable Z, which affects Y through the mediated pathway Z → D → Y. In the case of a randomized controlled trial, Z indicates the random experimental assignment to treatment and control arms. Alternatively, Z can refer to an IV, which is not experimentally randomized but is “as good as random” for the purposes of identifying D → Y. Either way, if the five assumptions we detail below are satisfied on the causal chain from Z to Y, Z can be used to identify the D → Y relationship.
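In the diet soda example, a cost shifter such as an aspartame price index could play the role of Z. A minimal sketch (again with invented coefficients; the aspartame framing is purely illustrative) shows the Wald/IV estimator, the ratio of the reduced-form effect of Z on Y to the first-stage effect of Z on D, recovering a true price effect that the naive regression misses:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

U = rng.normal(size=n)          # unobserved confounder (demand expectations)
Z = rng.normal(size=n)          # cost shifter, independent of U
price = 10 + 0.8 * U + 0.5 * Z + rng.normal(size=n)
sales = 50 - 2.0 * price + 3.0 * U + rng.normal(size=n)  # Z excluded from sales

def slope(x, y):
    """OLS slope of y on x."""
    c = np.cov(x, y)
    return c[0, 1] / c[0, 0]

first_stage = slope(Z, price)     # effect of Z on D
reduced_form = slope(Z, sales)    # effect of Z on Y
iv = reduced_form / first_stage   # Wald/IV estimate of D -> Y
print(f"IV estimate: {iv:.2f} (true effect: -2.0)")
```

Because Z moves price but is unrelated to U and enters sales only through price, dividing the reduced form by the first stage isolates the causal price effect.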

Assumptions

For Z to be a valid IV for identifying the causal effect of D → Y, five assumptions need to be satisfied:

  1. the independence assumption,
  2. the stable unit treatment value assumption (SUTVA),
  3. the inclusion restriction,
  4. the exclusion restriction, and
  5. the monotonicity assumption.

The plausibility of these assumptions determines whether the IV approach is suitable for a specific research question within a given institutional context. Next, we explain these assumptions and detail their implications in the context of an example to build intuition. For more technical coverage, we recommend that readers consult standard econometrics textbooks and articles.

Five Assumptions

The first assumption, called the independence assumption, also known as the ignorability or exchangeability assumption, stipulates that the assignment of different levels of Z to different units (e.g., firms, individuals) is as good as random. In our DAG, this means that Z is not associated with U or W (indicated by the fact that no edge is drawn between Z and these variables in Figure 1). We can weaken this assumption to conditional independence. For example, if there were an association between Z and W, conditional independence could be achieved by controlling for W.[3] One can alternatively think of this assumption as stipulating that there is no selection into Z or, if there is selection, that it is only on observables. The researcher needs to defend the plausibility of this assumption, which is most easily done when the instrument Z is indeed randomized. Otherwise, why believe the conditional independence of Z but not of D? The answer will depend on the institutional context, as we discuss below.

The second assumption, the stable unit treatment value assumption (SUTVA), states that the value of unit i’s instrument or treatment does not affect other units’ potential outcomes (i.e., there are no unmodeled spillovers or interference). This assumption permits us to write Di(Z) = Di(Zi) and Yi(Z, D(Z)) = Yi(Zi, Di(Zi)). This is not a trivial assumption, and researchers should contemplate its plausibility carefully. In practice, SUTVA violations can occur for many reasons, including general equilibrium effects, anticipation, contagion, information spillovers, social comparisons, externalities, and network effects, among others. Even in the context of a randomized experiment, SUTVA may be violated. For example, if an instrument (or experimental manipulation) varies the prices of a subset of firms, it can also have an impact on the prices of untreated firms through competitive pricing responses.

When the SUTVA and independence assumptions are satisfied, the IV is referred to as being exogenous. Many marketing papers focus on the independence assumption and fail to consider SUTVA in discussing the validity of their IV. Moreover, it is common for marketing papers to refer to the independence assumption as the exclusion restriction—a confusion that may be driven by the necessity of the independence assumption in establishing exogeneity. We discuss the exclusion restriction below and highlight the need to discuss the plausibility of all five assumptions if the objective is to identify the causal effect D → Y.

It is important to note that the independence and SUTVA assumptions are sufficient for Z → Y to be interpreted causally. In the IV literature, this causal effect is called the reduced form effect. Sometimes, the reduced form effect suffices when the research question can be satisfactorily answered by identifying the causal effect of Z on Y.[4] However, if the causal effect of interest is D → Y, three additional assumptions we discuss below are necessary for the identification of the local average treatment effect (LATE), which is defined as the causal effect D → Y among units for which Z had an effect on D. The IV approach can only identify the “local” effect of D on Y for units whose treatment status can be manipulated by the instrument, and not the average treatment effect (ATE) among all units. This has implications for generalizability of the identified causal effect, which we return to in our concluding remarks.

The third assumption, called the inclusion restriction, also referred to as the first-stage assumption or relevance assumption, is that the instrument Z must influence D. This assumption is the only one among the five assumptions that can be empirically verified. It is expected that papers provide empirical evidence for the association between Z and D and assess the strength of that relationship by testing the coefficient on the instrument in the so-called first-stage regression of the treatment on the instrument. We return to the issue of power of the first stage and inference with weak instruments in our concluding remarks.
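Because the inclusion restriction is the only empirically verifiable assumption, the first-stage regression and its F-statistic are routinely reported. A self-contained sketch with simulated data (the first-stage coefficient of 0.3 is an arbitrary choice; the conventional rule of thumb flags an F below roughly 10 as a weak-instrument warning):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
Z = rng.normal(size=n)
D = 0.3 * Z + rng.normal(size=n)   # hypothetical first stage

# First-stage regression D = a + b*Z + e; for a single instrument, F = t^2 on b.
X = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(X, D, rcond=None)
resid = D - X @ beta
sigma2 = resid @ resid / (n - 2)             # residual variance
var_b = sigma2 * np.linalg.inv(X.T @ X)[1, 1]  # variance of the slope estimate
F = (beta[1] ** 2) / var_b
print(f"first-stage coefficient: {beta[1]:.3f}, F-statistic: {F:.1f}")
```

With this sample size and coefficient, the instrument is comfortably strong; shrinking the coefficient or the sample pushes F toward the weak-instrument region.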

The fourth assumption, the exclusion restriction assumption, indicates that any effect of the instrument Z on outcome Y is exclusively through its effect on D. This assumption is depicted by Figure 1 indicating that there is no other impact of Z on Y once we condition on the value of D. If the model is overidentified (i.e., there are more instruments than endogenous variables), then an overidentification test (e.g., Sargan–Hansen test) can be performed to test whether all instruments are uncorrelated with the 2SLS residuals. However, these tests require a constant effects assumption that is often difficult to defend, and therefore overidentification tests are not commonly used. Instead, researchers establish logical support for this assumption using institutional details and falsification tests, as we discuss in the next section.

As we noted previously, marketing studies sometimes confuse the exclusion restriction with the independence assumption. Thus, it is important to highlight the differences. While the independence assumption is about the instrument (Z) not being correlated with unobserved confounders (U), the exclusion restriction states that the instrument does not impact the dependent variable (Y) except through its effect on the endogenous independent variable (D). For example, in using a cost shifter as an IV for price in estimating the impact of prices on demand, the cost shifter satisfies the exclusion restriction if the only effect that the cost shifter has on demand operates through its effect on price. Thus, not all cost shifters will satisfy the exclusion restriction, a point we return to when discussing the role of the institutional context in assessing the plausibility of the assumptions.

The fifth assumption, called the monotonicity assumption, is the final assumption necessary in the identification of the LATE when D → Y is heterogeneous across units (; , ; Imbens and Angrist 1994).[5] The monotonicity assumption requires that a hypothetical change in Z either has no impact on a unit’s treatment status D or changes it in the same direction as it does for all other units on which it has an impact. Let’s consider a binary Z and a binary D for illustrative purposes. In the language of the potential outcomes model (), compliers are units whose behavior is impacted by the instrument. For compliers, assume D = 1 when Z = 1 and D = 0 when Z = 0. Defiers are those for whom the instrument has the opposite effect as compliers; for them, then, D = 1 when Z = 0 and D = 0 when Z = 1. Always-takers and never-takers are not impacted by the instrument; for always-takers, D = 1, and for never-takers, D = 0, regardless of Z. Thus, always-takers and never-takers do not inform the IV estimate. The monotonicity assumption indicates that we can have either compliers or defiers, but not both.[6] The monotonicity assumption would be violated, for example, when a nudge (e.g., antismoking ads) works in the expected direction for some but causes a backlash reaction for others. Without making further assumptions, it is not possible to empirically verify the monotonicity assumption.[7]
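What goes wrong when defiers coexist with compliers can be shown in a small simulation of our own (all shares and effect sizes are arbitrary choices, not from any study discussed here): the Wald estimate ends up matching neither group's effect.

```python
import numpy as np

def wald_estimate(share_defier, rng, n=400_000):
    """Simulate one dataset with compliers, defiers, and never-takers."""
    p = [0.5, share_defier, 0.5 - share_defier]  # complier / defier / never-taker
    types = rng.choice(["complier", "defier", "never"], size=n, p=p)
    z = rng.integers(0, 2, size=n)
    d = np.where(types == "complier", z,
                 np.where(types == "defier", 1 - z, 0))  # defiers do the opposite
    effect = np.where(types == "complier", 2.0, -1.0)    # heterogeneous effects
    y = effect * d + rng.normal(0.0, 1.0, size=n)
    num = y[z == 1].mean() - y[z == 0].mean()            # reduced form
    den = d[z == 1].mean() - d[z == 0].mean()            # first stage
    return num / den

rng = np.random.default_rng(2)
print(f"no defiers:  {wald_estimate(0.0, rng):.2f}")  # the complier effect, 2.0
print(f"20% defiers: {wald_estimate(0.2, rng):.2f}")  # ~4.0: matches neither group
```

With defiers, the numerator mixes the two groups' effects while the denominator nets out their opposing responses to Z, so the ratio can land outside the range of either group's true effect.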

A Demonstration

Authors need to translate the implications of these five assumptions into their empirical context and convince the reader of their plausibility. To demonstrate, we discuss the example offered by Calder-Wang and Gompers (2021), who study the impact of employee gender diversity on venture capital firm performance. To identify this causal effect, the authors use the sex of venture capital partners’ children as an instrument for the decision to hire a woman. The inclusion restriction conjecture is that partners who have more daughters are more likely to hire women. The authors discuss conceptual reasons for this relationship and also present empirical evidence of the first stage, which we recommend all papers using the IV approach do.

In this context, SUTVA requires that the hiring decisions and financial performance of a partner are impacted only by the number of daughters the partner has, and not by the gender of the children of other partners. This assumption can be violated in several ways. For example, if the supply of qualified female employees is extremely tight, then the increased interest in hiring female employees due to the gender composition of one firm’s partners could impact the hiring and/or financial performance of a competing firm.

The independence assumption is equivalent to assuming that whether partners have sons or daughters (conditional on having children) is as good as random. If certain parents (e.g., those who hold different gender views) employed a gender-based stopping rule (e.g., keep having children until they have at least one son), the independence assumption would be violated. To defend against this particular concern, the authors provide evidence that a first-born daughter does not predict the total number of children, which constitutes an example of a falsification test—a concept we discuss below.

The exclusion restriction in this example necessitates that the genders of partners’ children do not have an impact on the venture capital firm’s performance except through the impact on gender diversity in the firm. The authors recognize that if having more daughters directly improves a partner’s skills in a way that increases their ability to source or close deals, this assumption would be violated. They provide evidence that venture capital partners with more daughters do not have more successful deals, which is another example of a falsification test.

Finally, the monotonicity assumption necessitates that we can only have partners for whom having more daughters would either increase their likelihood of hiring women or not impact it, but we cannot have partners for whom having more daughters would decrease the likelihood of hiring women. This assumption would be violated, for example, if, for a minority of partners, having daughters reinforces sexist views of the workplace.

We summarize this discussion in Table 1. The table also includes Sinkinson and Starc (2019) as another working example to demonstrate the assumptions required in using political advertising cycles as an instrument for advertising spend. For a more detailed discussion of potential violations of the exclusion restriction and the monotonicity assumption in using political cycles as instruments, and the important role time and market fixed effects play, see Moshary, Shapiro, and Song (2021). As these examples make clear, the plausibility of the identifying assumptions needs to be defended with institutional details and supporting empirical patterns. If the assumptions cannot be credibly defended, researchers should not use the IV approach.

Table 1. Summary of Assumptions in the IV Approach

Independence assumption: The instrument Z does not share common causes with the outcome Y.
  Critical question: Are there any omitted variables that determine Z and Y? What allows us to claim that Z is as good as random (when X is not)?
  Calder-Wang and Gompers (2021): The gender of a partner’s children is determined by nature, and thus independent of firm performance.
  Sinkinson and Starc (2019): The variation in political ad spending is independent of the variation in statin demand conditions.

SUTVA: A unit’s response to its own value of the instrument Zi does not depend on the value of the instrument for other units Z−i.
  Critical question: Is it possible that there are spillovers or interference among different units?
  Calder-Wang and Gompers (2021): A partner’s hiring decisions and financial performance are not impacted by the gender of other partners’ children.
  Sinkinson and Starc (2019): Advertising and sales of a pharmaceutical company in a market-month do not respond to political ad spending in other markets and months.

Inclusion restriction: The instrument Z must influence the treatment D, either positively or negatively.
  Critical question: Why do we expect D to respond to Z? (Note: this is the only assumption for which we can provide empirical evidence.)
  Calder-Wang and Gompers (2021): Partners who have more daughters are more likely to hire women.
  Sinkinson and Starc (2019): Increases in political advertising spending displace other types of advertising.

Exclusion restriction: The effect of instrument Z on outcome Y operates only through the effect of Z on D.
  Critical question: Can Z influence Y through other channels (direct or indirect) that do not run through D?
  Calder-Wang and Gompers (2021): The gender of partners’ children does not affect firm performance except through its impact on hiring.
  Sinkinson and Starc (2019): The political advertising cycle has no other effect on pharmaceutical demand except through its impact on advertising decisions.

Monotonicity: The impact of the instrument Z on D across units of analysis is (weakly) in the same direction.
  Critical question: Do compliers and defiers coexist?
  Calder-Wang and Gompers (2021): Having more daughters would (weakly) increase the likelihood of hiring women for all partners.
  Sinkinson and Starc (2019): All statin manufacturers (weakly) decrease advertising when political ad spending increases.

Disclaimer/Warning: The goal of this table is to summarize the discussion in the “Assumptions” section. It should not be used as a template or a checklist; it is intended to support, not replace, critical engagement with the necessary identifying assumptions in a given empirical context.

Evaluating and Defending the Plausibility of Assumptions

None of the identifying assumptions we discussed above, except for the inclusion restriction, can be empirically validated. Therefore, they must be logically established and defended based on common sense, subject matter arguments, and institutional details. Goldfarb, Tucker, and Wang (2022, p. 5) suggest that “the objective for the authors is to pursue projects only when they can convince themselves (and their readers) that the causal interpretation is more plausible than other possible explanations. It is impossible to prove the validity of a quasi-experiment. … The credibility of any quasi-experimental work therefore relies on the plausibility of the argument for causality rather than on any formal statistical test.” In our assessment, many of the published papers in marketing using the IV approach do not offer a sufficiently detailed discussion of the implications and plausibility of the required assumptions in the empirical context they study. When they do, the discussion tends to focus mainly on the relevance and independence assumptions. In addition to providing a discussion of other assumptions, we suggest that researchers treat identification as a central part of the manuscript’s narrative, using institutional details and theory to tie together elements that make the research question important and the identification valid. Thus, in this section, we discuss approaches researchers can take to evaluate the plausibility of all the assumptions in the IV framework to develop a cohesive and convincing story.

Institutional Details

Subject matter arguments based on institutional knowledge are paramount to judging whether the required assumptions are plausible and, consequently, whether causal inference can be achieved. In many quasi-experimental papers, identifying assumptions are justified solely by subject-matter arguments that use institutional details. The clarity with which authors translate the identifying assumptions to their context, and the detailed discussion of institutional context they provide, go a long way toward convincing the reader of a causal relationship.

Institutional context makes or breaks an instrument. An instrument that satisfies an assumption naturally in some contexts may blatantly violate it in others. To illustrate, let’s consider cost-based instruments, which are commonly used in marketing and industrial organization to obtain the causal impact of prices on demand. Do input costs as an instrument satisfy the exclusion restriction? The answer depends on the institutional details. For example, consider using orange wholesale prices as an instrument for orange juice prices when estimating demand for orange juice in Michigan. Imagine that a drought in Florida pushed up orange wholesale prices across the nation. It might be relatively straightforward to defend that this input cost variation is plausibly exogenous to demand conditions for orange juice in Michigan. Now, instead, consider the cost of steel as an input for automobile manufacturing. The automotive industry accounts for 10%–15% of global steel use, and automobile production levels are known to impact steel prices. Therefore, the demand for automobiles may have an impact on steel prices. Alternatively, both steel prices and demand for automobiles may be impacted by the strength of the economy. In this context, it may be hard to refute that steel prices have no association with consumer demand for automobiles except through their impact on car prices.

Another illustration of the institutional context mattering for the validity of an instrument comes from examiner designs (also called judge fixed-effect design, or leniency design). In these designs, there is an examiner who has discretion in determining the outcomes (e.g., a judge in a hearing, a grader in a class, or a consumer in responding to a satisfaction survey), and there is systematic heterogeneity in their judgments (e.g., some judges being systematically more lenient than others, some consumers being generally more grumpy). Similar designs have been adapted to study marketing questions (e.g., Lee, Bollinger, and Staelin 2023; Li and Xie 2020). In cases where the assignment of the examiner is as good as random, we can consider the identity of the examiner as an instrument for the treatment whose effect we are trying to examine. For example, consider being interested in the impact of review valence on product sales, and assume that certain consumers are systematically more negative in their review behavior and that the arrival of consumer types (in terms of their overall negativity) is random. We could imagine using the identity of the consumer as an instrument for review valence. In this context, is the monotonicity assumption satisfied? It depends. Monotonicity holds whenever any product that would have received a 4-star rating from a generally negative consumer receives a 4-star or 5-star rating from a happy-go-lucky consumer. It is violated if the happy-go-lucky consumer sometimes rates products worse than the generally negative consumer. In this context, the researcher may argue that the assumption is more likely to hold within a product category, rather than across categories. However, even within a product category, consumers may vary in what makes them unhappy. 
For example, the happy-go-lucky consumer might be generally positive, except in cases where the product arrives damaged, in which case they switch and become even more negative than the generally negative consumer. The institutional context matters greatly in the researcher’s ability to make a case for (or refute) the likelihood of this scenario.[8]
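The logic of an examiner (leniency) design can be sketched with a toy simulation. This is our own hypothetical example, with all names and numbers invented: randomly assigned reviewers differ in leniency, and the assigned reviewer's leniency serves as an instrument for the rating a product receives. Note that leniency is exogenous here by construction; real applications typically construct leave-one-out leniency measures.

```python
import numpy as np

rng = np.random.default_rng(3)
n_products, n_reviewers = 100_000, 50

leniency = rng.normal(0.0, 0.5, size=n_reviewers)        # reviewer fixed effects
reviewer = rng.integers(0, n_reviewers, size=n_products) # random assignment

quality = rng.normal(size=n_products)                    # unobserved confounder
# Rating reflects the assigned reviewer's leniency plus product quality.
rating = 3.0 + leniency[reviewer] + 0.8 * quality + rng.normal(0.0, 0.3, size=n_products)
# Sales depend on the rating (true effect 1.5) and directly on quality.
sales = 1.5 * rating + 2.0 * quality + rng.normal(0.0, 1.0, size=n_products)

# Naive OLS of sales on rating is confounded by quality ...
ols = np.cov(rating, sales)[0, 1] / np.var(rating)
# ... but the assigned reviewer's leniency is unrelated to quality, so it
# can serve as an instrument for the rating.
z = leniency[reviewer]
iv = np.cov(z, sales)[0, 1] / np.cov(z, rating)[0, 1]

print(f"OLS: {ols:.2f}, IV: {iv:.2f}")  # OLS biased upward; IV close to 1.5
```

The simulation builds in monotonicity (a more lenient reviewer always rates weakly higher); as the paragraph above argues, whether that holds in a real setting is an institutional question, not a mechanical one.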

There are many papers that expertly weave institutional details into the narrative and use them to lay out the rationale for the plausibility of the identifying assumptions. For example, consider Bruhn et al. (2022), who examine whether veterans who have experienced more combat exposure are more likely to have negative life outcomes postdeployment (education, financial health, suicide, incarceration, etc.). They clearly explain the institutional process of brigade assignments in the U.S. Army to support the relevance of their instrument and defend the (conditional) independence assumption. In making a case for the plausibility of the exclusion restriction against one potential criticism, Sinkinson and Starc (2019) point out that detailing spending levels are set annually and therefore cannot be quickly adapted at the market level in response to declines in TV ad spending. Similarly, in examining peer effects on salesperson quitting behavior using the IV approach, Sunder et al. (2017) argue that management’s evaluation of a salesperson in the salesperson’s first month at the firm satisfies the independence assumption due to its private nature. We recommend that all authors think through the institutional details when specifying a causal model and picking an instrument, and communicate these details to their readership in the context of the five identifying assumptions we have discussed.

Falsification Tests

An important benefit of specifying the identifying assumptions of a causal research design is that these assumptions often have falsifiable implications. Although identifying assumptions cannot be verified empirically, researchers can conduct tests of these implications to check whether they can be empirically refuted (). These tests are called falsification tests. Failed falsification attempts do not prove the assumptions they are designed to falsify, but they can help build the case for the plausibility of the identifying assumptions. In this section, we aim to build intuition for devising falsification tests by sharing examples of tests used by researchers across a variety of areas.

In devising a falsification test for the independence assumption, researchers often check for balance in observables across levels of Z. The idea here is that if Z is as good as random, then we would not expect any systematic differences in the means or distributions of pretreatment covariates across different levels of Z.[9] At other times, researchers evaluate a plausible threat to the independence assumption. We discussed one such example in the case of Calder-Wang and Gompers (2021). Another example refuting a threat to the independence assumption comes from Gong et al. (2017). The authors conduct a field experiment to show that tweets by a media company on Sina Weibo about its TV shows increase the viewership of these shows. The experiment assigns a subset of shows to receive company tweets. The authors want to refute the possibility that the assignment of shows was somehow correlated with unobservables that drive show viewership (e.g., show popularity). To do so, they exploit the fact that tweeting happens at a single point in time, but the TV show airs at different times across geographic areas. Under the independence assumption, we would not expect any impact of treatment when tweeting happened after the show aired in a geography. If the assignment were correlated with unobservables, however, we would expect higher viewership of such shows even when tweeting happened after the show aired. This constitutes a clever falsification test of the independence assumption in the paper’s institutional context.
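A minimal balance check of the kind described above might look as follows (simulated data of our own; `age` is a hypothetical pre-treatment covariate):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
z = rng.integers(0, 2, size=n)        # instrument, assigned at random here
age = rng.normal(35.0, 10.0, size=n)  # hypothetical pre-treatment covariate

# Two-sample t-statistic for the difference in means across levels of Z.
diff = age[z == 1].mean() - age[z == 0].mean()
se = np.sqrt(age[z == 1].var(ddof=1) / (z == 1).sum()
             + age[z == 0].var(ddof=1) / (z == 0).sum())
t = diff / se
print(f"balance t-statistic for age: {t:.2f}")  # |t| near zero is consistent with independence
```

With many covariates, a single omnibus balance test is preferable to a battery of independent tests, as footnote 9 notes.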

To create a falsification test for the exclusion restriction, researchers ask what empirical pattern might suggest that Z has an impact on Y that does not operate through D alone. The falsification test Calder-Wang and Gompers (2021) offer for the exclusion restriction is an example of evaluating a particular channel by which Z may have a direct effect on Y. In most cases, falsifying the exclusion restriction involves testing the reduced form effect of Z on Y in situations where it is impossible (or extremely unlikely) for Z to influence D. If Z has an impact on Y in these situations, it would show that Z’s impact on Y does not always operate through D. Sometimes, it may be possible to evaluate the reduced form effect among never-takers as a falsification test. For example, Erikson and Stoker (2011) use the Vietnam draft lottery based on birth dates as an instrument for vulnerability to military service to study whether this vulnerability impacts political attitudes. Under the exclusion restriction assumption, birth dates should not affect (1) the attitudes of men who were exempt from the draft due to college deferrals, or (2) women’s attitudes. Thus, the reduced form effect is expected to be zero for these two groups of never-takers, unless birth dates have a direct effect on political attitudes that does not operate through vulnerability to military service.

In other cases, it might be possible to devise falsification tests based on the fact that the exclusion restriction would predict a null effect of the instrument among always-takers. For example, in the context of medical research, a physician’s general tendency to operate is used as an instrument for whether patients received surgery, in order to examine the impact of having surgery versus not having it on mortality. Keele et al. (2019) provide a falsification test that exploits the fact that certain subgroups of patients in the data are always operated on, and therefore their in-hospital mortality should not be affected by their physician’s general tendency to operate (i.e., the reduced form effect among always-takers should be zero) if the exclusion restriction holds (see also ).

Falsification tests for other assumptions can also be devised based on the particulars of the empirical context (see, e.g., ; ). Overall, falsification tests can be a useful tool in making the case for the plausibility of the identifying assumptions, but as we cautioned above, should not be interpreted as providing proof for them.

Concluding Remarks

In this piece, we provided a brief discussion of the identifying assumptions in the IV approach, focusing on the importance of assessing their plausibility in a given empirical context and possessing sufficient institutional knowledge to do so. While we focused on one approach, we hope that many of the insights are useful for researchers in thinking through other quasi-experimental approaches. As we conclude, we would like to draw the reader’s attention to a few additional items.

First, we want to emphasize the importance of being clear on the nature of confounding relationships. This clarity helps determine whether the IV approach is necessary and, if necessary, helps identify a valid IV. More generally, clearly specifying the sources of confoundedness is a useful first step in figuring out what methods are useful to deal with the identification challenges they present. Simpler methods, with fewer assumptions, might be sufficient and preferred. For instance, if the confounders vary at a level higher than the variation in D or Y, the researcher may be able to control for U with fixed effects.
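The point about fixed effects can be illustrated with a toy simulation of our own (arbitrary numbers, no study implied): when the confounder is constant within groups, demeaning within groups removes it, and no instrument is needed.

```python
import numpy as np

rng = np.random.default_rng(5)
n_groups, per_group = 200, 100
g = np.repeat(np.arange(n_groups), per_group)    # group id per observation

u = rng.normal(size=n_groups)[g]                 # confounder, constant within group
d = 0.5 * u + rng.normal(size=g.size)            # treatment correlated with U
y = 1.0 * d + 2.0 * u + rng.normal(size=g.size)  # true effect of D on Y: 1.0

def demean(x, g, k):
    """Subtract group means (the 'within' transformation)."""
    means = np.bincount(g, weights=x, minlength=k) / np.bincount(g, minlength=k)
    return x - means[g]

naive = np.cov(d, y)[0, 1] / np.var(d)           # confounded by U
d_w, y_w = demean(d, g, n_groups), demean(y, g, n_groups)
fe = (d_w @ y_w) / (d_w @ d_w)                   # group fixed effects estimate

print(f"naive OLS: {naive:.2f}, within-group FE: {fe:.2f}")  # FE close to 1.0
```

The within transformation is algebraically equivalent to including a dummy for each group, which is why fixed effects suffice whenever U varies only at the group level.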

Second, we would like to highlight the issue of weak instruments. Although the inclusion restriction only requires that Z is associated with D, a large body of literature, which we do not cover here, shows that weak associations can mean that the 2SLS estimate is vulnerable to bias (e.g., ; ). The weak instrument bias is often exacerbated by a large number of instruments.[10] Therefore, one strong instrument is generally preferred over using many instruments, some of which are weak (). This should also caution readers against including a large number of fixed effects as IVs in the model, especially if the nature of the confounder does not require them. To assess the strength of the instrument, researchers often use the rule of thumb that the F-statistic (on the null hypothesis that the coefficients in the first stage are zero) should be 10 or larger, even though the original research this rule of thumb is predicated on offers more nuanced critical values (e.g., ; ; ). A well-known issue with using this rule of thumb is that homoskedasticity was a key assumption in the literature that produced it. In the case of one endogenous regressor and linear models, Olea and Pflueger (2013) propose an effective first-stage F-statistic that corrects for nonhomoskedastic errors (e.g., clustering, autocorrelation). Lee et al. (2022) offer an inference approach for the single-instrument case that is robust to heteroskedasticity and clustering, which applies an adjustment factor to the 2SLS standard errors based on the first stage. This work shows that once violations of homoskedasticity are considered, the necessary critical values are larger by an order of magnitude compared with the common rule of thumb. We should also highlight that in the case of a weak first stage, not all hope is lost: researchers can use weak-instrument-robust inference. We refer the interested reader to the econometrics literature on weak instruments (e.g., ; ; Andrews, Stock, and Sun 2019; ) for further details.
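The weak-instrument problem can be seen in a simple simulation of our own (this sketch does not implement the effective-F or adjusted-standard-error procedures cited above): with a weak first stage, the just-identified 2SLS estimate becomes erratic across samples.

```python
import numpy as np

def two_sls(strength, rng, n=500):
    """One simulated dataset; returns the just-identified 2SLS (Wald) estimate."""
    u = rng.normal(size=n)                       # unobserved confounder
    z = rng.normal(size=n)                       # instrument
    d = strength * z + u + rng.normal(size=n)    # first stage with given strength
    y = 1.0 * d + u + rng.normal(size=n)         # true effect of D on Y: 1.0
    return np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]

rng = np.random.default_rng(6)
strong = [two_sls(1.00, rng) for _ in range(500)]
weak = [two_sls(0.05, rng) for _ in range(500)]

# With a weak first stage, the denominator is frequently near zero, so the
# estimate swings wildly from sample to sample.
print(f"strong first stage: mean {np.mean(strong):.2f}, sd {np.std(strong):.2f}")
print(f"weak first stage:   mean {np.mean(weak):.2f}, sd {np.std(weak):.2f}")
```

The strong-instrument estimates cluster tightly around the true effect, while the weak-instrument estimates are far more dispersed, which is why first-stage strength deserves explicit attention.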

Third, it is important for authors using the IV method to think carefully about the external validity of the results they obtain. As we have discussed, the IV approach can only identify the treatment effect among compliers. This is an unknown subset of the data, as treated units are a mix of always-takers and compliers. Furthermore, the complier group depends on the instrument used, so different instruments lead to different estimands. Given the instrument(s) the researcher is using, it is therefore important to consider the specificity of the source of variation in D that is generated by Z and discuss how generalizable the results may be to other groups, situations, or times. Sometimes, the instrument only impacts the behavior of a narrow group of people who are likely to have a different D → Y effect than the population of interest. At other times, we may not expect meaningful differences in the causal D → Y relationship between compliers and the general population. Research benefits from transparency. We recommend that authors openly discuss external validity issues and use institutional details to support any arguments of generalizability.

To conclude, strategic and nonrandom decisions by consumers, managers, firms, regulators, and other institutional actors permeate marketing problems and issues. For many such situations, the IV approach may be the right quasi-experimental approach to study research questions of interest. We hope this piece encourages the appropriate use of IVs as tools to provide valid theoretical and managerial insights.

Citation

Grewal, Rajdeep and Yesim Orhun (2024), “Unpacking the Instrumental Variables Approach,” Impact at JMR. Available at: /marketing-news/unpacking-the-instrumental-variables-approach/

Acknowledgments

The authors thank Elea Feit, Avi Goldfarb, and Xiao Liu for their helpful feedback.

References

Abadie, Alberto (2003), “,” Journal of Econometrics, 113 (2), 231–63.

Andrews, Donald W.K. and James H. Stock (2005), “,” NBER Technical Working Paper 0313, https://www.nber.org/papers/t0313.

Andrews, Isaiah, James H. Stock, and Liyang Sun (2019), “,” Annual Review of Economics, 11, 727–53.

Angrist, Joshua and Michal Kolesár (2024), “,” Journal of Econometrics, 240 (2), 105398.

Angrist, Joshua D., Guido W. Imbens, and Alan B. Krueger (1999), “,” Journal of Applied Econometrics, 14 (1), 57–67.

Angrist, Joshua D., Guido W. Imbens, and Donald B. Rubin (1996), “,” Journal of the American Statistical Association, 91 (434), 444–55.

Angrist, Joshua D. and Alan B. Krueger (1999), “,” in Handbook of Labor Economics, O. Ashenfelter and D. Card, eds. Elsevier, 1277–1366.

Angrist, Joshua D. and Alan B. Krueger (2001), “,” Journal of Economic Perspectives, 15 (4), 69–85.

Angrist, Joshua D. and Jörn-Steffen Pischke (2009), . Princeton University Press.

Bruhn, Jesse, Kyle Greenberg, Matthew Gudgeon, Evan K. Rose, and Yotam Shem-Tov (2022), “,” NBER Working Paper 30622, https://www.nber.org/papers/w30622.

Burke, Marshall, Lauren F. Bergquist, and Edward Miguel (2019), “,” Quarterly Journal of Economics, 134 (2), 785–842.

Calder-Wang, Sophie and Paul A. Gompers (2021), “,” Journal of Financial Economics, 142 (1), 1–22.

Chernozhukov, Victor and Christian Hansen (2008), “,” Economics Letters, 100 (1), 68–71.

Chyn, E., B. Frandsen, and E.C. Leslie (2024), “,” NBER Working Paper 32348, https://www.nber.org/papers/w32348.

Cunningham, Scott (2021), . Yale University Press.

Danieli, Oren, Daniel Nevo, Itai Walk, Bar Weinstein, and Dan Zeltzer (2024), “,” arXiv preprint, https://doi.org/10.48550/arXiv.2312.15624.

De Chaisemartin, Clément (2017), “,” Quantitative Economics, 8 (2), 367–96.

Erikson, Robert S. and Laura Stoker (2011), “,” American Political Science Review, 105 (2), 221–37.

Frandsen, Brigham, Lars Lefgren, and Emily Leslie (2023), “,” American Economic Review, 113 (1), 253–77.

Gerber, Alan S. and Donald P. Green (2012), . W.W. Norton.

Goldfarb, Avi, Catherine Tucker, and Yanwen Wang (2022), “,” Journal of Marketing, 86 (3), 1–20.

Gong, Shiyang, Juanjuan Zhang, Ping Zhao, and Xuping Jiang (2017), “,” Journal of Marketing Research, 54 (6), 833–50.

Hansen, Ben B. and Jake Bowers (2008), “,” Statistical Science, 23 (2), 219–36.

Heckman, James J., Sergio Urzua, and Edward Vytlacil (2006), “,” Review of Economics & Statistics, 88 (3), 389–432.

Heckman, James J. and Edward Vytlacil (2005), “,” Econometrica, 73 (3), 669–738.

Heckman, James J. and Edward J. Vytlacil (2000), “,” Economics Letters, 66 (1), 33–39.

Holtz, David, Felipe Lobel, Ruben Lobel, Inessa Liskovich, and Sinan Aral (2024), “,” Management Science (published online April 5), https://doi.org/10.1287/mnsc.2020.01157.

Imbens, Guido W. (2020), “,” Journal of Economic Literature, 58 (4), 1129–79.

Keele, Luke, Qingyuan Zhao, Rachel R. Kelz, and Dylan Small (2019), “,” Medical Care, 57 (2), 167–71.

Kiviet, Jan F. and Sebastian Kripfganz (2021), “,” Economics Letters, 205, 109935.

Lee, David S., Justin McCrary, Marcelo J. Moreira, and Jack Porter (2022), “,” American Economic Review, 112 (10), 3260–90.

Lee, Nah, Bryan Bollinger, and Richard Staelin (2023), “,” Journal of Marketing Research, 60 (1), 130–54.

Li, Yiyi and Ying Xie (2020), “,” Journal of Marketing Research, 57 (1), 1–19.

Moshary, Sarah, Bradley T. Shapiro, and Jihong Song (2021), “,” Marketing Science, 40 (2), 283–304.

Murray, Michael P. (2006), “,” Journal of Economic Perspectives, 20 (4), 111–32.

Olea, José L.M. and Carolin Pflueger (2013), “,” Journal of Business & Economic Statistics, 31 (3), 358–69.

Papies, Dominik, Peter Ebbes, and Elea M. Feit (2023), “,” in The History of Marketing Science, 2nd ed., Russell S. Winer and Scott A. Neslin, eds. World Scientific, 253–300.

Pearl, Judea (2009), “,” Statistics Surveys, 3, 96–146.

Rossi, Peter E. (2014), “,” Marketing Science, 33 (5), 655–72.

Rubin, Donald B. (2005), “,” Journal of the American Statistical Association, 100 (469), 322–31.

Sinkinson, Michael and Amanda Starc (2019), “,” Review of Economic Studies, 86 (2), 836–81.

Staiger, Douglas and James Stock (1997), “,” Econometrica, 65 (3), 557–86.

Stock, James H., Jonathan H. Wright, and Motohiro Yogo (2002), “,” Journal of Business & Economic Statistics, 20 (4), 518–29.

Stock, James H. and Motohiro Yogo (2002), “,” Technical Working Paper 0284, https://www.nber.org/papers/t0284.

Sunder, Sarang, V. Kumar, Ashley Goreczny, and Todd Maurer (2017), “,” Journal of Marketing Research, 54 (3), 381–97.

Yang, Fan, José R. Zubizarreta, Dylan S. Small, Scott Lorch, and Paul R. Rosenbaum (2014), “,” American Statistician, 68 (4), 253–63.

Yang, Joonhyuk, Jun Youn Lee, and Pradeep K. Chintagunta (2021), “,” Journal of Marketing Research, 58 (5), 925–47.

Yu, Shan, Mrinal Ghosh, and Madhu Viswanathan (2022), “,” Journal of Marketing Research, 59 (3), 659–73.


[1] In a directed graph, each node is a random variable, and the edges are directed, indicating causal associations in the direction of the arrow. In acyclic graphs, causality runs in one direction. To learn more about DAGs and their usefulness, see and .

[2] Care must be taken when controlling for the W → Y relationship. A model with strong functional form assumptions, such as a linear regression model, may not be appropriate. Approaches such as matching, nonparametric methods, and machine learning methods can be used instead.

[3] With all other assumptions satisfied, two-stage least squares (2SLS) with covariates estimates a weighted average of the covariate-specific local average treatment effects (LATEs). Abadie (2003) shows how to estimate the overall LATE using a weighting approach based on a “propensity score” for the instrument.

[4] The IV framework parallels the failure-to-treat issue in randomized experiments. We can think of Z as the randomized experimental manipulation and D as the indicator of whether a person is treated if assigned to treatment. In this interpretation, the reduced form effect (Z → Y) is the intent-to-treat (ITT) effect (see, e.g., ). Marketing researchers using experiments often only report the impact of the manipulation on the outcome variable, which is the reduced form effect.

[5] If the homogeneity of D → Y assumption can be upheld, then (1) we do not need the monotonicity assumption, and (2) the causal effect can be generalized to the population at large, giving us the ATE instead of the LATE. However, the homogeneity assumption is unlikely to hold in many marketing applications; therefore, we discuss the monotonicity assumption in the main text.

[6] This nomenclature is the reason why the LATE is also referred to as the complier average causal effect (CACE). See de Chaisemartin (2017) for inference in the IV approach without the monotonicity assumption.

[7] Frandsen, Lefgren, and Leslie (2023) offer a test of the monotonicity assumption in a particular IV design by making additional assumptions on the ATE among those who violate the monotonicity assumption.

[8] For more details on inference in examiner designs, we refer the interested reader to . Chapter 7.8.2 of Cunningham (2021) also provides a useful discussion of the plausibility of the independence, exclusion restriction, and monotonicity assumptions in examiner designs.

[9] Of course, if the assumption is conditional independence, the balance assessment is also conditional. When assessing balance across a number of variables, instead of running numerous independent balance tests, it is better to employ one omnibus balance test (e.g., ).

[10] Intuitively, the 2SLS estimator with multiple instruments is a weighted average of the causal effects of each instrument, where the weights are related to the strength of the first-stage (Angrist and Pischke 2009).
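The quantities discussed in these notes (reduced form, first stage, LATE) can be seen in a small simulation. This is an illustrative sketch, not code from the article; the data-generating values are made up. With a homogeneous treatment effect of 2.0, the Wald ratio of the reduced form (ITT) to the first stage recovers the effect, while the naive treated-versus-untreated comparison is biased by confounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Z: randomized encouragement (the experimental manipulation in note [4])
z = rng.integers(0, 2, n)
# U: unobserved confounder affecting both take-up D and outcome Y
u = rng.normal(size=n)
# D: actual treatment; Z only ever increases take-up, so monotonicity holds
d = ((0.8 * z + 0.5 * u + rng.normal(size=n)) > 0.5).astype(float)
# Y: true treatment effect of 2.0, confounded through U
y = 2.0 * d + 1.5 * u + rng.normal(size=n)

naive = y[d == 1].mean() - y[d == 0].mean()        # biased by U
itt = y[z == 1].mean() - y[z == 0].mean()          # reduced form (ITT) effect
first_stage = d[z == 1].mean() - d[z == 0].mean()  # share of compliers
late = itt / first_stage                           # Wald / IV estimate

print(f"naive: {naive:.2f}  ITT: {itt:.2f}  "
      f"first stage: {first_stage:.2f}  LATE: {late:.2f}")
```

Because the simulated effect is homogeneous, the LATE here coincides with the ATE (see note [5]); with heterogeneous effects, the ratio recovers only the compliers’ average effect.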


The post Unpacking the Instrumental Variables Approach appeared first on .

AMA and its Foundation Celebrate Marketing Scholars and Professionals Advancing Marketing’s Future | Thu, 22 Aug 2024

Awardees honored at the 2024 Summer Academic Conference in Boston, MA

Chicago, IL—The American Marketing Association (AMA) and its Foundation are pleased to recognize this year’s marketing award winners who were celebrated at the 2024 Summer Academic Conference held in Boston, Massachusetts, August 15-18, 2024.


“The influence of technology-powered marketing and its intersection with the global economy, geopolitics, socio-cultural issues and human behavior warrant exploration into transformative research that has potential for implementation in responsible business practices,” said AMA CEO Bennie F. Johnson. “We convened scholars and practitioners from more than 38 countries in Boston this past week for our 2024 Summer Academic Conference to talk about reconnecting with humanity and marketing’s role in promoting responsible technology while serving people. What an amazing and generative experience. And it was an honor to congratulate and celebrate all of this year’s awardees for their research and scholarship in the marketing profession. Their work is changing marketing’s future.”

Thanks to our 2024 conference co-chairs from Fordham University, Loyola Marymount University, and the Tuck School of Business at Dartmouth.

This year’s winners are:

Distinguished Winner: 

Winners: 

  • Wendy De La Rosa, Abigail B. Sussman, Eric Giannella, and Maximilian Hell, “” | Proceedings of the National Academy of Sciences
  • Claudia Gonzalez-Arcos, Alison M. Joubert, Daiane Scaraboto, Rodrigo Guesalaga, and Jörgen Sandberg, “” | Journal of Marketing
  • Kristopher O. Keller and Jonne Y. Guyt, “” | Journal of Marketing
  • Jenny Olson, Scott Rick, Deborah Small, and Eli Finkel, “” | Journal of Consumer Research
  • Nathaniel Posner, Andrey Simonov, Kellen Mrkva, and Eric J. Johnson, “” | Proceedings of the National Academy of Sciences
  • Nicole Robitaille, Nina Mazar, Claire I. Tsai, Avery M. Haviv, and Elizabeth Hardy, “” | Journal of Marketing

Finalists:

  • Chris Blocker, Jon Zhang, Ron Hill, Caroline Roux, Canan Corus, Martina Hutton, Joshua Dorsey, and Elizabeth Minton, “” | Journal of Consumer Psychology
  • Jochen Wirtz, Werner H. Kunz, Nicole Hartley, and James Tarbit, “” | Journal of Service Research

Learn more about the winners and awards:

  • Valuing Diversity PhD Scholarship: In partnership with the PhD Project and the Academic Council, the Valuing Diversity PhD Scholarship seeks to widen opportunities for underrepresented populations to attend marketing doctoral programs.
  • Robert J. Lavidge Global Marketing Research Award: The Robert J. Lavidge Global Marketing Research Award recognizes a marketing practitioner or educator who has devised and successfully implemented a research/insight procedure that has practical implications for use by others.
  • Williams-Qualls-Spratlen Multicultural Mentoring Award of Excellence: This award recognizes world class marketing scholars and mentors of people of color while carrying on the legacy of Jerome Williams, Bill Qualls, and Thaddeus Spratlen.
  • AMA-EBSCO-RRBM Annual Award for Responsible Research in Marketing: The purpose of this award is to recognize already-published responsible research in marketing, where responsible research is defined as work that produces both useful and credible knowledge.
  • Charles Coolidge Parlin Marketing Research Award: The Charles Coolidge Parlin Marketing Research Award is the oldest and most distinguished award in the field.  This award is given to leading scholars and practitioners in honor of Charles Coolidge Parlin who is considered the pioneer of marketing research.
  • Thomas C. Kinnear/Journal of Public Policy & Marketing Award: Named after Journal of Public Policy & Marketing’s founding editor, Thomas C. Kinnear, the award honors articles that make the most significant contribution to the understanding of marketing and public policy issues within a three-year time period.  
  • S. Tamer Cavusgil Award: Each year, the Editorial Board of Journal of International Marketing honors the article published in the calendar year that has the most significant contribution to the advancement and practice of international marketing management.
  • Hans B. Thorelli Award: This award honors an article that has made the most significant and long-term contribution to international marketing theory or practice.
  • Paul E. Green Award: This award honors the best Journal of Marketing Research article published within the last calendar year.
  • Weitz-Winer-O’Dell Award: This award recognizes Journal of Marketing Research articles that have made the most significant long-term contribution to marketing theory, methodology, and/or practice.
  • Journal of Interactive Marketing Best Paper Award: This award honors the best Journal of Interactive Marketing article published in a given calendar year.
  • Shelby D. Hunt / Harold H. Maynard Award: Established in 1974 under the leadership of Journal of Marketing editor in chief Edward W. Cundiff, the Harold H. Maynard Award was established to honor the best Journal of Marketing article on marketing theory. The award was renamed the Shelby D. Hunt/Harold H. Maynard Award in 2015 to honor three-time award recipient Shelby D. Hunt, who led the establishment of an endowment to secure the award in perpetuity. This annual award continues to recognize articles published in the Journal of Marketing that make the most significant contributions to marketing theory in a given calendar year.
  • AMA / Marketing Science Institute / H. Paul Root Award: This annual award is given to the Journal of Marketing article that has made the most significant contribution to the advancement of the practice of marketing in a calendar year.

###

About the AMA

As the leading global professional marketing association, the AMA is the essential community for marketers. From students and practitioners to executives and academics, we aim to elevate the profession, deepen knowledge, and make a lasting impact. The AMA is home to five premier scholarly journals including: Journal of Marketing, Journal of Marketing Research, Journal of Public Policy and Marketing, Journal of International Marketing, and Journal of Interactive Marketing. Our industry-leading training events and conferences define future forward practices, while our professional development and PCM® professional certification advance knowledge. With 70 chapters and a presence on 350 college campuses across North America, the AMA fosters a vibrant community of marketers. The association’s philanthropic arm, the AMA Foundation, is inspiring a more diverse industry and ensuring marketing research impacts public good.


The AMA views marketing as the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large. You can learn more about the AMA’s learning programs and certifications, conferences and events, and scholarly journals at ama.org.

The post and its Foundation Celebrate Marketing Scholars and Professionals Advancing Marketing’s Future appeared first on .

Using Return on Marketing Investment Effectively | Wed, 24 Jul 2024: Attempting to distill the performance of marketing investments into one simple figure can lead to future spending mistakes. A number of JMR studies reveal the right methods for measuring return on marketing investment (ROMI) in different contexts.

Marketing executives make many strategic decisions across various spending categories, products, and markets. For example, they may choose to invest in online advertising for product A, launch a price promotion for product B, and engage in a sponsorship for their full brand.

Executives therefore need a metric that assesses and compares the productivity and accountability of their many marketing engagements. Return on marketing investment (ROMI) is the logical metric of choice.


Ideally, ROMI metrics would be single numbers that executives could easily compare across marketing activities. For example, a manager might want to state with confidence, “My search advertising campaign yielded a return of 60%, well above average for ad campaigns for our brand and exceeding last year’s return of 45%.” Importantly, the return calculations would be made on net marketing contribution, found by multiplying revenue increase due to marketing by gross margin, subtracting marketing investment, and dividing the result by marketing investment.
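The net-contribution calculation just described can be written as a one-line function. This is a minimal sketch with made-up numbers, not the article’s data; the figures are chosen so that the result matches the 60% return in the example above:

```python
def romi(revenue_increase, gross_margin, spend):
    """Return on marketing investment: net marketing contribution
    (incremental revenue times gross margin, minus spend), divided by spend."""
    net_contribution = revenue_increase * gross_margin - spend
    return net_contribution / spend

# Hypothetical search campaign: $400,000 of incremental revenue at a 40%
# gross margin on $100,000 of spend yields a 60% return.
print(f"{romi(400_000, 0.40, 100_000):.0%}")
```

Running the snippet prints 60%, the return quoted in the manager’s statement above.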

Unfortunately, the reality of marketing does not lend itself well to simple ROMI performance metrics, and executives must understand the metric’s determinants before deploying it across tactics strategically.

Download Article

Get this article as a PDF

ROMI’s Determinants

Consumer response to marketing activities is not linear. Research shows it is typically concave, with diminishing returns to scale, or S-shaped, increasing and then showing diminishing returns (). As a result, the profit response to marketing spending is typically inverted-U-shaped (). And ROMI depends critically on marketing spending.

In the hypothetical search advertising example, ROMI might be 150% for the first $10,000 spent, 40% for the next $10,000, and negative at higher spending levels. Firms therefore cannot compare ROMI across different marketing campaigns or media, unless they spend the same amount on each.
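A tiny simulation makes the point that ROMI falls with spend under a concave response. The square-root response curve and all constants below are illustrative assumptions, not estimates from any study:

```python
import math

GROSS_MARGIN = 0.40

def incremental_revenue(spend):
    # Illustrative concave response: revenue lift grows with the square root of spend
    return 500 * math.sqrt(spend)

def romi(spend):
    return (incremental_revenue(spend) * GROSS_MARGIN - spend) / spend

for spend in (10_000, 40_000, 160_000):
    print(f"spend ${spend:>7,}: ROMI {romi(spend):7.0%}")
```

With these assumed parameters, ROMI is strongly positive at $10,000 of spend, roughly zero at $40,000, and negative at $160,000: the same campaign looks great, break-even, or wasteful depending only on how much is spent.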

Academic and practicing marketing analysts have long recognized this challenge. Instead of reporting ROMI in papers, academics focus on top-line productivity metrics, such as sales lift due to marketing, net profit, contribution to overhead, or marginal ROMI (i.e., return on last dollar spent).

To use ROMI correctly, marketers must understand consumer response patterns and the accounting consequences of spending. Researchers have carefully examined marketing spending’s impact on short-term and long-term profitability, customer lifetime value, and other strategically important metrics. But as much as practitioners favor ROMI as a simple yardstick, they must focus on ROMI’s determinants—top-line performance enhancement, profit margins, and marketing costs—then derive the metric on a case-by-case basis. In so doing, they cannot expect a simple return metric that can easily be compared to others ().

ROMI for Marketing Tactics

Most marketing effectiveness studies examine individual actions, particularly advertising. Some focus on marketing’s direct impact on sales to derive the profit and ROMI implications.

With improved intermediate consumer attitudinal data, especially digital metrics like clicks and likes, analysts can derive ROMI in two steps: (1) Estimate marketing’s lift on an intermediate metric (e.g., clicks a digital ad generates) and (2) determine how the intermediate metric translates into future sales (i.e., the conversion rate) (; ). Marketers can make the necessary inferences using historical data and econometric methods, experiments, or a combination of the two ().
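The two-step logic can be sketched directly. All numbers here are hypothetical: a lift in an intermediate metric (clicks), times a conversion rate and revenue per sale, gives incremental revenue, from which ROMI follows as before:

```python
def two_step_romi(incremental_clicks, conversion_rate, revenue_per_sale,
                  gross_margin, spend):
    # Step 1 is assumed done: the campaign's lift on the intermediate metric
    # (incremental clicks) has been estimated from an experiment or model.
    # Step 2: translate the intermediate lift into sales, then into a return.
    incremental_revenue = incremental_clicks * conversion_rate * revenue_per_sale
    return (incremental_revenue * gross_margin - spend) / spend

# Hypothetical display campaign: 50,000 extra clicks, a 2% conversion rate,
# $150 revenue per sale, a 40% gross margin, and $30,000 of spend
print(f"{two_step_romi(50_000, 0.02, 150, 0.40, 30_000):.0%}")
```

For these assumed inputs the campaign returns 100%: 1,000 incremental sales generate $150,000 of revenue and $60,000 of contribution against $30,000 of spend.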

In the digital world, marketers have extended their ROMI models to account for the full consumer journey, which allows advertising to reach the right people at the right time (). The analysts make a distinction between first-purchasers (customer acquisition) and repeat-purchasers (customer retention, upselling, and cross-selling), as their ad responsiveness has been shown to differ (). Combining the two effects enables analysts to estimate marketing’s impact on customer lifetime value ().

ROMI for Marketing Strategy

In the context of marketing strategy, which typically combines multiple instruments, ROMI must focus on long-term performance impact, specifically sustained performance growth (). Researchers have shown that long-term sales growth is more sensitive to investments in product and distribution than to advertising and sales promotions. Indeed, firms cannot expect advertising or sales promotion ROMI results to have a sustained impact.

Marketers can use ROMI to examine how marketing investments enhance critical assets known to improve long-term business performance. Meta-analytic research has demonstrated that marketing assets are more important in driving firm value than individual marketing actions: the firm value elasticity of brand strength is .3 and that of customer relationship strength is .7, while for advertising spending the elasticity is only .04. Based on these meta-analytic results, the recommended strategic marketing allocation is to invest 61% of budget in customer-related assets, 28% in brand-related assets, and 11% in market share.

Research has shown that customer-based assets correlate strongly with customer satisfaction, and customer satisfaction movements can relate to stock price changes (). Published reviews have also been shown to influence customer satisfaction with new products; research finds that review quality elasticity is about .7, while that of advertising is .11.

Summary

Marketing executives and academic analysts have taken significant interest in ROMI. While executives would like to have one number to gauge their marketing investment’s performance, oversimplification can lead to significant future spending errors.

For individual tactics, such as advertising and sales promotions, firms must derive ROMI by measuring marketing’s lift on top-line performance and conducting a marketing cost analysis. Marginal ROMI, found by determining return on last dollar spent, might serve as a unifying metric, being positive for underspending, negative for overspending, and zero for right-spending. But for more strategic marketing decisions, firms should use long-term growth measurement and/or changes in brand or customer relationship assets driving long-term performance to derive ROMI.
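Marginal ROMI can be approximated numerically from any profit response curve. The concave curve below is a made-up illustration; the point is only that the derivative of profit with respect to spend is positive below the optimum, zero at it, and negative above it:

```python
import math

def profit(spend):
    # Illustrative profit response: net contribution of 200*sqrt(spend), minus spend
    return 200 * math.sqrt(spend) - spend

def marginal_romi(spend, eps=1e-3):
    # Return on the last dollar spent: numerical derivative of profit w.r.t. spend
    return (profit(spend + eps) - profit(spend - eps)) / (2 * eps)

for spend, label in [(2_500, "underspending"), (10_000, "right-spending"),
                     (40_000, "overspending")]:
    print(f"spend ${spend:>6,}: marginal ROMI {marginal_romi(spend):+.2f} ({label})")
```

Under this assumed curve, the profit-maximizing budget is $10,000: each extra dollar earns about a dollar at $2,500 of spend, nothing at $10,000, and loses fifty cents at $40,000.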

Citation

Hanssens, Dominique M. (2024), “Using Return on Marketing Investment Effectively,” Impact at JMR, available at /wp-content/uploads/2024/07/Using-Return-on-Marketing-Investment-Effectively.pdf.

References

Ataman, Berk, Harald J. van Heerde, and Carl F. Mela (2010), “,” Journal of Marketing Research, 47 (5), 866–82.

Danaher, Peter J. and Harald J. van Heerde (2018), “,” Journal of Marketing Research, 55 (5), 667–85.

Deighton, John, Caroline M. Henderson, and Scott Neslin (1994), “,” Journal of Marketing Research, 31 (1), 28–42.

Dekimpe, Marnik G. and Dominique M. Hanssens (1999), “,” Journal of Marketing Research, 36 (4), 1–31.

Dinner, Isaac M., Harald J. van Heerde, and Scott A. Neslin (2014), “,” Journal of Marketing Research, 51 (5), 527–45.

Edeling, Alexander and Marc Fischer (2016), “,” Journal of Marketing Research, 53 (4), 515–34.

Edeling, Alexander, and Alexander Himme (2018), “,” Journal of Marketing, 82 (3), 1–24.

Farris, Paul, Dominique M. Hanssens, James Lenskold, and David Reibstein (2015), “,” Applied Marketing Analytics, 1 (3), 267–82.

Floyd, Kristopher, Ryan Freling, Saad Alhoqail, Hyun Young Cho, and Traci Freling (2014), “,” Journal of Retailing, 90 (2), 217–32.

Fornell, Claes, Forrest Morgeson III, and G. Tomas Hult (2016), “,” Journal of Marketing, 80 (5), 92–107.

Gupta, Sunil, Donald R. Lehmann, and Jennifer Ames Stuart (2004), “,” Journal of Marketing Research, 41 (1), 7–18.

Hanssens, Dominique M., Koen H. Pauwels, Shuba Srinivasan, Marc Vanhuele, and Gokhan Yildirim (2014), “,” Marketing Science, 33 (4), 534–50.

Hanssens, D.M., Leonard J. Parsons, and Randall L. Schultz (2001), , 2nd ed. Kluwer Academic Publishers.

Krishnamurthi, Lakshman, Jack Narayan, and S.P. Raj (1986), “,” Journal of Marketing Research, 23 (4), 337–45.

Mantrala, Murali K., Prasad A. Naik, Shrihari Sridhar, and Esther Thorson (2007), “,” Journal of Marketing, 71 (2), 26–44.

Sethuraman, Raj, Gerard J. Tellis, and Richard A. Briesch (2011), “,” Journal of Marketing Research, 48 (3), 457–71.

The post Using Return on Marketing Investment Effectively appeared first on .

A Review of Copula Correction Methods to Address Regressor–Error Correlation | Wed, 15 May 2024: Professors Sungho Park and Sachin Gupta draw on decades of research to outline a three-step procedure for researchers interested in applying the copula correction method.

The omnipresent error term in regression models does not always receive careful attention from model builders. What factors are included in this error? Naturally, it would be ideal if the error were entirely due to random shocks. However, sometimes factors that should be explicitly incorporated in the model but cannot be observed or are unavailable to be used as explanatory variables are also present in the error. Worse, often our accumulated knowledge and theories indicate that the variables seeping into the error term are systematically related to the explanatory variables included in the model. This results in regressor–error correlation, which, if ignored, leads to biased estimates.


Why Does Regressor–Error Correlation Arise?

As an example, consider research investigating the effect of two key visual design decisions on consumer purchasing: brand typicality (similarity within the brand’s range) and segment typicality (similarity to the competitive set). In the car market, visual appearance is a vital determinant of success; hence, automakers track consumers’ changing tastes and strategically incorporate these into design changes in their newest models. However, because researchers typically cannot observe these changing tastes, they are encompassed in the error term. As a result, empirical models intended to measure the impact of design changes suffer from the problem that the key regressor capturing design changes is correlated with the error. Put differently, the design change regressor is endogenous. If not corrected, the estimated impact of design changes will be biased. As our academic field matures, we continue to discover reasons for regressor–error correlations that were previously overlooked. Examples of such phenomena are (1) advertising endogeneity due to self-selection by consumers in advertising response and (2) pricing endogeneity, because firms and consumers know aspects of product quality that researchers do not see in the data. While additional information such as instrumental variables or exogenous shocks can help address the endogeneity issue, obtaining such information is often challenging. In such situations, the copula correction method provides an alternative approach.

How Does the Copula Correction Work?

The copula correction method directly addresses the issue of regressor–error correlation by assuming a plausible relationship between the endogenous regressor and the error. This additional structure enables the researcher to estimate the model parameters without bias. However, the crucial underlying condition is that the assumed relationship between the endogenous regressor and the error is appropriate. Park and Gupta’s (2012) copula correction method (P&G method hereinafter) assumes a general and convenient Gaussian copula–based relationship between the regressor and the error. The various advantages of the Gaussian copula are well known (). The Gaussian copula covers nearly the full (−1, 1) range in pairwise correlation, making it a general and robust copula for most applications. Additionally, its complexity increases at a much slower rate than other multivariate models as the number of dimensions increases.

The P&G method has been extensively used in marketing in diverse contexts such as addressing potential endogeneity of product design changes (e.g., Heitmann et al. 2020), advertising content decisions (e.g., ), and marketing-mix variables (). Since publication of Park and Gupta (2012), various methods that directly model the regressor–error relationship to avoid bias have evolved through subsequent studies. Interestingly, recent developments in this area explore the assumptions of the P&G method and make meaningful improvements by either relaxing them or suggesting alternatives that offer methodological benefits. Accordingly, with the goal of assisting applied researchers interested in employing the copula correction method, in this paper we revisit each assumption of the P&G method and illustrate how new methods enhance them.


The P&G method makes the following assumptions: (1) the endogenous regressor (let’s call it X_en) is nonnormal, (2) the error follows a normal distribution, and (3) the dependence between the endogenous regressor and the error can be captured by a Gaussian copula. The model may include exogenous regressors (let’s call them X_ex) along with the endogenous one. An implicit assumption made in the P&G method is (4) there is no correlation between exogenous and endogenous regressors.

Assumption 1 is easily testable and is often satisfied in many cases (below, we discuss methods that relax this assumption). Assumption 2 is a plausible assumption, commonly used in likelihood-based models or Bayesian models, but it can be violated, and empirical testing can be challenging, especially in situations with regressor–error correlation. Similarly, Assumption 3 is a plausible assumption but it cannot be easily tested (we will also discuss a method that relaxes Assumptions 2 and 3). Fortunately, Assumption 4 is easily testable. If X_ex and X_en are highly correlated, it is necessary to appropriately incorporate this correlation when constructing the model, as bias may arise otherwise. proposes a likelihood-based estimation method for this situation by constructing the joint distribution of the error and all explanatory variables to carry out the estimation. We also note that a number of other recently proposed methods account for the correlation between X_ex and X_en: a nonparametric control function method (), 2sCOPE model (), and SORE model (). Table 1 summarizes all the assumptions of the P&G method, indicates whether they are testable, and suggests methods to consider in case the assumption is violated or to enhance robustness.

Table 1. Assumptions of the P&G Method and Recent Developments

Assumption 1: The endogenous regressor is nonnormal. Testable: Yes.
  • If violated: the 2sCOPE model of Yang, Qian, and Xie (2023).
Assumption 2: The error follows a normal distribution. Testable: No.
  • If violated: the nonparametric control function method of Breitung, Mayer, and Wied (2024).
Assumption 3: The dependence between the endogenous regressor and the error can be captured by a Gaussian copula. Testable: No.
  • If violated: the SORE model of Qian and Xie (2023); the nonparametric control function method of Breitung, Mayer, and Wied (2024).
Assumption 4: There is no correlation between exogenous and endogenous regressors. Testable: Yes.
  • If violated: Haschka (2022); the nonparametric control function method of Breitung, Mayer, and Wied (2024); the 2sCOPE model of Yang, Qian, and Xie (2023); the SORE model of Qian and Xie (2023).

Park and Gupta (2012) demonstrate that the copula correction method can be applied to discrete choice models as well. The crucial first step in applying the copula correction method is appropriately deriving the linear form of regressor–error dependence. For instance, in the analysis of aggregate sales data, prominent models such as BLP () include linear regressor–error dependence between price and common shocks (i.e., price endogeneity). Once we obtain this linear form of regressor–error dependence, applying a copula correction method can address estimation issues.
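The mechanics of the P&G correction in the linear case can be sketched in a few lines: compute the correction term Φ⁻¹(Ĥ(X_en)), the inverse-normal transform of the empirical CDF of the endogenous regressor, and add it to the regression as a generated regressor. The simulation below is our own illustration (simulated data; an exponential marginal and ρ = 0.6 are arbitrary choices satisfying Assumptions 1–3), and it omits the bootstrap standard errors a real application would require:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(1)
n = 20_000

# Tie the endogenous regressor and the error together via a Gaussian copula
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = np.array([nd.cdf(v) for v in z[:, 0]])
x = -np.log(1.0 - u)          # exponential marginal: nonnormal (Assumption 1)
e = z[:, 1]                   # normally distributed error (Assumption 2)
y = 1.0 + 2.0 * x + e         # true slope is 2.0

# Naive OLS is biased because x is correlated with e
X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# P&G correction term: inverse-normal transform of the empirical CDF of x
ranks = x.argsort().argsort() + 1
ecdf = ranks / (n + 1)        # rank/(n+1) keeps values strictly inside (0, 1)
x_star = np.array([nd.inv_cdf(p) for p in ecdf])

Xc = np.column_stack([np.ones(n), x, x_star])
b_copula = np.linalg.lstsq(Xc, y, rcond=None)[0]

print(f"OLS slope: {b_ols[1]:.3f}  copula-corrected slope: {b_copula[1]:.3f}")
```

In this simulation the naive OLS slope is biased well above 2.0, while the slope from the regression augmented with the correction term is close to the true value; the correction term plays the control-function role described later in the article.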

Recent Developments in Copula Correction Methods

Table 2 summarizes the key strengths of recently proposed methods. The data used in empirical marketing analyses often have a panel structure. When panel data sets have numerous cross-sectional units and relatively few time periods per unit, the challenges of estimation are addressed through fixed-effect transformation. Haschka (2022) extends Park and Gupta’s (2012) approach to panel data where fixed-effect transformation is necessary. The concern when applying fixed-effect transformation is the presence of nonspherical errors. After resolving the problem of nonspherical errors through a generalized least squares transformation, Haschka develops a copula correction method based on the joint distribution of the error and all explanatory variables.

Table 2. Key Strengths of Recently Proposed Methods

• Haschka (2022): Provides a fixed-effect transformation to handle data with numerous cross-sectional units but relatively few time periods per unit.
• Breitung, Mayer, and Wied (2024), nonparametric control function method: Provides a robustness check of the P&G method in cases where researchers cannot justify that (1) the error follows a normal distribution and/or (2) the dependence between the endogenous regressor and the error follows the Gaussian copula (e.g., previous studies may argue that the error deviates from normality).
• Yang, Qian, and Xie (2023), 2sCOPE model: Allows for the application of the copula correction method when X_en follows a normal distribution, X_en and X_ex are correlated, and X_ex deviates from normality.
• Qian and Xie (2023), SORE model: Handles discrete endogenous regressors with only a few levels, such as binary regressors or count-valued regressors with small means.

The copula correction method obtains unbiased estimates of model parameters by modeling the relationship between the regressor and the error. Of course, the true relationship between the two is unknown. The P&G method provides a plausible starting point, and adding other options is naturally beneficial for empirical research. By considering models based on alternative relationships between regressors and errors, researchers can conduct more robust analyses.

In the P&G method, the assumed regressor–error correlation based on Gaussian copula allows us to decompose the error into (a) the part correlated with the endogenous regressor and (b) pure exogenous shocks that are unrelated with all the regressors. Part (a) is expressed as a nonlinear function of the endogenous regressor, and this part plays a role very similar to a control function (for an overview of control functions, see, e.g., and ). Breitung, Mayer, and Wied (2024) propose a novel “nonparametric control function method.” In this approach, the control function that constitutes Part (a) follows a normal distribution, and Part (b) is a mean-zero shock that does not necessarily have to be normal. Consequently, Assumption 2 of the P&G method is relaxed. Similar to the P&G method, which assumes nonnormality of the endogenous regressor for model identification, the Breitung, Mayer, and Wied model requires that specific assumptions related to the distribution of the endogenous regressor be satisfied. While this approach originates from the idea of the copula correction approach, it has the advantage of not assuming a specific copula. Furthermore, Breitung, Mayer, and Wied formally demonstrate the consistency, asymptotic normality, and validity of bootstrap standard errors for the model parameters.

We turn next to Assumption 1, which is that the endogenous regressor has a nonnormal distribution. The recently proposed “two-stage copula endogeneity correction” (2sCOPE) method relaxes this requirement (Qian and Xie 2023). Additionally, like Haschka (2022), 2sCOPE assumes that the endogenous regressors, exogenous regressors, and errors are interrelated through a Gaussian copula. For estimation it employs a two-stage approach using control functions derived from the assumed model. An advantage of the method is that it allows for consistent parameter estimation even if the endogenous regressor follows a normal distribution, as long as one of the correlated exogenous regressors deviates from normality.

As noted, the essence of the copula correction approach lies in directly modeling the correlation between regressors and errors to estimate model parameters without bias. The semiparametric odds ratio (SOR) has often been used in applied research in marketing and related fields as a flexible method to capture dependence between variables (see, e.g., ; ). The semiparametric odds ratio endogeneity (SORE) model has recently been proposed as a method that utilizes SOR to capture regressor–error dependence (Qian and Xie 2023). One notable advantage of SOR is its ability to handle the association between discrete endogenous regressors and the error effectively. While the P&G method can be applied to discrete endogenous regressors, it does not handle endogenous regressors with only a few levels well; examples are binary regressors or count-valued regressors with small means. This limitation arises because the P&G method treats discrete endogenous regressors as realizations from underlying continuous latent variables and performs an inverse mapping from the cumulative distribution functions of endogenous regressors to the latent variables. The SORE model addresses this issue. However, this benefit comes at a cost: SORE constructs a conditional distribution from the odds ratio (OR) function and nonparametric baseline distribution functions. If the OR function is misspecified, it can lead to bias and/or issues of model nonidentification.

One of the primary reasons researchers may choose to use SORE is its ability to handle binary endogenous regressors. A more classical solution in such cases is to employ a Gaussian copula–based approach with a structure similar to the models proposed by or . These models assume a specific relationship between the binary endogenous regressor and the error based on Gaussian copula. In this scenario, researchers can estimate the model without bias using conditional likelihood instead of the reverse mapping proposed in Park and Gupta (2012).

The robustness of the P&G method has been stress-tested in multiple subsequent studies. Park and Gupta (2012) demonstrated the copula correction method's performance in a simple setting without an intercept. Becker, Proksch, and Ringle (2022) show that its performance in the more general setting with an intercept is diminished when the sample size is small. However, Qian et al. find that the substantial bias identified by Becker, Proksch, and Ringle is primarily due to their method of constructing the empirical copula. Specifically, their correction term maps the highest rank to a fixed-value percentile, which can significantly distort the distribution of the copula correction terms and result in suboptimal performance of the copula correction method. When the P&G method is applied more precisely, as suggested by Qian et al., the bias in the coefficient estimate of the endogenous regressor becomes negligible once the sample size reaches 400, rather than 4,000. Becker, Proksch, and Ringle also carefully examine the nonnormality assumption and how it affects the results. In a similar vein, other researchers investigate the performance of the P&G method when various assumptions are violated, especially in cases of near-normal endogenous regressors, nonnormal and skewed errors, and regressor–error correlation based on non-Gaussian copulas, and they provide guidelines for such scenarios. Like all models, copula correction methods rely on assumptions, and their use naturally requires significant caution, especially when the sample size is small. Fortunately, the series of recent papers extending the original P&G method addresses many of these situations: the issue of a near-normal endogenous regressor can be mitigated with the 2sCOPE method, and problems related to skewed or nonnormal errors can be addressed through the nonparametric control function method.
Moreover, an advantage of both SORE and the nonparametric control function methods is their flexibility to consider relationships between regressors and errors that do not necessarily follow a Gaussian copula.
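The empirical-copula construction issue discussed above can be illustrated directly. The sketch below contrasts scaling ranks by n/(n+1) with a raw ECDF whose top rank is clamped to a fixed extreme percentile regardless of sample size; the latter injects an outlier into the correction terms in small samples. This is a stylized illustration in our own code; the exact constructions in the cited studies differ in detail:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

def pstar_rank_scaled(x):
    """Correction terms with ranks scaled by n/(n+1): the largest
    observation maps to n/(n+1), which grows only slowly with n."""
    ranks = np.argsort(np.argsort(x)) + 1
    return np.array([nd.inv_cdf(r / (len(x) + 1.0)) for r in ranks])

def pstar_fixed_top(x, top=1 - 1e-7):
    """Correction terms from the raw ECDF r/n, with the top rank clamped
    to a fixed percentile (independent of n) so inv_cdf stays finite."""
    ranks = np.argsort(np.argsort(x)) + 1
    u = np.minimum(ranks / float(len(x)), top)
    return np.array([nd.inv_cdf(v) for v in u])
```

With n = 200, the largest rank-scaled term is about Φ⁻¹(200/201) ≈ 2.6, while the fixed-top construction yields Φ⁻¹(1 − 10⁻⁷) ≈ 5.2, a single extreme value that can distort small-sample estimates.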

Guidance for the Applied Researcher

To wrap up, we suggest the following three-step procedure for researchers interested in applying the copula correction method.

  1. Check whether the endogenous regressor follows a nonnormal distribution. If it is near normal, researchers can try the 2sCOPE model. If the endogenous variable is discrete and has only a few levels, such as binary regressors or count-valued regressors with small means, one can apply the SORE model. If the endogenous regressor follows a nonnormal distribution, proceed to Step 2.

  2. Check for correlations between X_en and X_ex. If the correlations are large, apply the 2sCOPE model.[1] If the data set has a panel structure and requires a fixed-effects transformation to handle numerous cross-sectional units and relatively few time periods, apply the method proposed by Haschka (2022). If the correlation between X_en and X_ex is low, apply the P&G method.

  3. As a robustness check, consider running the nonparametric control function method if the endogenous regressor is continuous. Unfortunately, Assumptions 2 and 3 of the P&G method are not easily testable. The nonparametric control function method does not require the normality of the error (Assumption 2) or assume a specific copula structure between the endogenous regressor and the error (Assumption 3). However, it does require an alternative set of assumptions, and some of these assumptions are also difficult to test using data. We suggest the nonparametric control function method as a robustness check because, like the P&G method, it is relatively easy to apply. Finding consistent results between the P&G method and the nonparametric control function method provides greater assurance of validity.
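The three steps above can be summarized as a simple decision rule. The Boolean flags are judgment calls the researcher supplies (e.g., from a normality test on the endogenous regressor and inspection of regressor correlations); the function is our own illustrative sketch, not part of any published package:

```python
def recommend_copula_method(discrete_few_levels: bool,
                            near_normal: bool,
                            panel_fixed_effects: bool,
                            high_corr_with_exogenous: bool) -> str:
    """Map the three-step guidance to a suggested estimator."""
    # Step 1: distribution of the endogenous regressor
    if discrete_few_levels:
        return "SORE"
    if near_normal:
        return "2sCOPE"
    # Step 2: data structure and regressor correlations
    if panel_fixed_effects:
        return "Haschka (2022) panel copula correction"
    if high_corr_with_exogenous:
        return "2sCOPE"
    # Step 3: P&G, plus a nonparametric control function robustness check
    return "P&G (check with nonparametric control function)"
```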

FAQ

Additionally, we provide below answers to some frequently asked questions regarding the use of the copula correction method in practice.

Q1: Is it correct to use multiple copula correction terms for multiple endogenous variables in the same model?

Answer: This is correct. One advantage of the copula correction method based on the Gaussian copula is that it can include multiple copula correction terms to handle multiple endogenous regressors.
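For example, with two endogenous regressors, each receives its own generated correction term in a single augmented regression. A sketch using our own rank-based helper (names are ours):

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

def correction_term(x):
    """Rank-based Gaussian copula correction term for one regressor."""
    ranks = np.argsort(np.argsort(x)) + 1
    return np.array([nd.inv_cdf(r / (len(x) + 1.0)) for r in ranks])

def design_two_endogenous(x_en1, x_en2, x_ex):
    """[1, X_en1, X_en2, X_ex, p_star1, p_star2]: one correction term
    per endogenous regressor, none for the exogenous regressor."""
    n = len(x_en1)
    return np.column_stack([np.ones(n), x_en1, x_en2, x_ex,
                            correction_term(x_en1), correction_term(x_en2)])
```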

Q2: In estimating a model with higher-order terms (e.g., interaction and quadratic terms) of the endogenous variable, should we generate additional copula correction terms for them?

Answer: This question has been formally analyzed in recent work. The result is that once copula correction terms for the main effects of the endogenous regressors are included as generated regressors, there is no need to include additional correction terms for interaction terms or higher-order terms. This simplicity in handling higher-order endogenous terms is a merit of the copula correction approach. More importantly, adding such unnecessary correction terms is harmful and leads to suboptimal correction of endogeneity bias.
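In design-matrix terms, this guidance means the quadratic and interaction columns enter the model without their own generated regressors; only the main-effect correction term is added. Again an illustrative sketch with our own helper:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

def correction_term(x):
    """Rank-based Gaussian copula correction term."""
    ranks = np.argsort(np.argsort(x)) + 1
    return np.array([nd.inv_cdf(r / (len(x) + 1.0)) for r in ranks])

def design_with_higher_order(x_en, x_ex):
    """[1, X_en, X_en^2, X_en*X_ex, X_ex, p_star]: one correction term
    for the main effect only -- none for X_en^2 or X_en*X_ex."""
    n = len(x_en)
    return np.column_stack([np.ones(n), x_en, x_en**2, x_en * x_ex,
                            x_ex, correction_term(x_en)])
```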

Q3: Is it acceptable to exclude nonsignificant copula correction terms from the final model when it involves multiple copula correction terms?

Answer: This issue resembles a common challenge in statistical analysis for which the answer is not clear-cut: should you exclude or include nonsignificant regressors when building the final model? Considering factors such as model complexity, influence on other variables, theoretical implications, and model fit, researchers may choose to drop nonsignificant regressors or leave them in the model. Removing a nonsignificant copula correction term from the final model can improve model simplicity, preserve degrees of freedom, and reduce multicollinearity. We suggest examining how sensitive the estimates of key variables are to removing nonsignificant copula correction terms; if the effects of key variables are not very sensitive, removal may be harmless.

Q4: Is it acceptable to utilize the significance of the copula correction term as an indicator to determine whether endogeneity is a concern?

Answer: If the P&G assumptions are correct, the nonsignificance of the copula correction term implies that there is no endogeneity caused by the regressor–error correlation. While the assumptions of P&G can serve as a plausible starting point, one cannot conclusively determine the absence of endogeneity based on this result alone. Therefore, it is advisable to consider other methods as a robustness check (e.g., nonparametric control function method, 2sCOPE).

Conclusion

The copula correction method has spread beyond marketing and is increasingly used in various fields, including management, economics, and psychology. Open-source code to implement the method is also becoming widely available (e.g., Gui et al. 2023). Concurrently, there has been substantial additional research on the assumptions and weaknesses of the original P&G model, leading to its development and evolution. As we know, there is no free lunch: to conduct analysis without instrumental variables or additional information, copula correction methods must make assumptions about the relationship between regressors and errors. Through further research, we need to understand this relationship better, both theoretically and empirically, and leverage that knowledge to develop copula correction models that capture the regressor–error correlation more completely.

Footnote

[1] Determining a precise threshold for “high” correlation is difficult and requires further research. Please refer to the last row of Table 1. We know that the bias resulting from the ignored correlation between X_en and X_ex depends on (1) the correlation between X_en and the error, (2) the correlation between X_en and X_ex, and (3) the variance of the error. If minimal regressor–error correlation is expected (based on previous results and/or theory) and the explained part in the variation of the dependent variable is large (i.e., the explanatory power of the model is high and thus the error variance is small), we can expect that the impact of the correlation between X_en and X_ex is minimal. See the appendices of Haschka (2022) and Yang, Qian, and Xie (2023). Moreover, if X_ex is highly correlated with X_en, we need to meticulously double-check the exogeneity of X_ex. Finding a suitable instrumental variable is challenging because it must be correlated with the endogenous regressor yet uncorrelated with the error term. Similarly, it is unlikely that a variable is truly exogenous if it is highly correlated with the endogenous variable.

Citation

Park, Sungho, and Sachin Gupta (2024), “A Review of Copula Correction Methods to Address Regressor–Error Correlation,” Impact at JMR. Available at: /marketing-news/a-review-of-copula-correction-methods-to-address-regressorerror-correlation/.

Acknowledgment

We would like to express our gratitude to Kapil Tuli and Rebecca Hamilton for their valuable feedback and numerous helpful suggestions during the review process, which have contributed to enhancing the utility of this article.

References

Becker, Jan-Michael, Dorian Proksch, and Christian M. Ringle (2022), Journal of the Academy of Marketing Science, 50, 46–66.

Berry, Steven, James Levinsohn, and Ariel Pakes (1995), Econometrica, 63 (4), 841–90.

Breitung, Jörg, Alexander Mayer, and Dominik Wied (2024), Econometrics Journal (published online January 24).

Chen, Hua Yun (2007), Biometrics, 63 (2), 413–21.

Danaher, Peter J. and Michael S. Smith (2011), Marketing Science, 30 (1), 4–21.

Datta, Hannes, Harald J. van Heerde, Marnik G. Dekimpe, and Jan-Benedict E.M. Steenkamp (2022), Journal of Marketing Research, 59 (2), 251–70.

Eckert, Christine and Jan Hohberger (2023), Journal of Management, 49 (4), 1460–95.

Gui, Raluca, Markus Meierer, Patrik Schilter, and René Algesheimer (2023), Journal of Statistical Software, 107 (3), 1–43.

Guitart, Ivan A. and Stefan Stremersch (2021), Journal of Marketing Research, 58 (2), 299–320.

Haschka, Rouven E. (2022), Journal of Marketing Research, 59 (4), 860–81.

Heckman, James J. (1976), Annals of Economic and Social Measurement, 5 (4), 475–92.

Heitmann, Mark, Jan R. Landwehr, Thomas F. Schreiner, and Harald J. Van Heerde (2020), Journal of Marketing Research, 57 (2), 257–77.

Lee, Lung-Fei (1983), Econometrica, 51 (2), 507–12.

Navarro, Salvador (2010), in Microeconometrics, Steven N. Durlauf and Lawrence E. Blume, eds. The New Palgrave Economics Collection. Palgrave Macmillan, 2–28.

Park, Sungho and Sachin Gupta (2012), Marketing Science, 31 (4), 567–86.

Qian, Yi and Hui Xie (2011), Marketing Science, 30 (4), 717–36.

Qian, Yi and Hui Xie (2023), Journal of Marketing Research (published online August 3).

Qian, Yi, Hui Xie, and Anthony Koschmann (2022), NBER Working Paper 29978.

Qian, Yi, Hui Xie, and Anthony Koschmann (2024), NBER Working Paper 32231.

Wooldridge, Jeffrey M. (2015), Journal of Human Resources, 50 (2), 420–45.

Yang, Fan, Yi Qian, and Hui Xie (2023), NBER Working Paper 29708.

The post A Review of Copula Correction Methods to Address Regressor–Error Correlation appeared first on .

Wies, Bleier, and Edeling Win the 2023 AMA/Marketing Science Institute/H. Paul Root Award (Wed, 10 Apr 2024)

The post Wies, Bleier, and Edeling Win the 2023 /Marketing Science Institute/H. Paul Root Award appeared first on .

Simone Wies
Goethe University Frankfurt
Alexander Bleier
Frankfurt School of Finance & Management
Alexander Edeling
KU Leuven

The annual AMA/Marketing Science Institute/H. Paul Root Award is given to the Journal of Marketing article that has made the most significant contribution to the advancement of the practice of marketing in a calendar year. It is cosponsored by the American Marketing Association (AMA) and the Marketing Science Institute, and it honors past Board Chair H. Paul Root, who also served as president of MSI from 1990 to 1998. The winners of the 2023 AMA/MSI/H. Paul Root Award are Simone Wies, Alexander Bleier, and Alexander Edeling for their article published in Volume 87, Issue 3.

The selection committee, composed of Detelina Marinova (University of Missouri), Jacob Goldenberg (Reichman University), and Jie Zhang (University of Maryland) noted:

This paper leveraged industry collaboration and proprietary data from several influencer agencies, complemented with eye tracking, lab studies and simulations, to offer novel insights that can guide practice on influencer marketing. The authors show that while a rise in influencer follower count drives consumer engagement, this engagement subsequently decreases unless it is mitigated by higher content customization and lower familiarity of the sponsored brand.

This work has significant real-world impact, as corporate spending on influencers will double in the next three years, and Statista estimates the size of the global influencer market at $24 billion in 2024, a 40.3% CAGR since 2016. The paper is in the top 5% of the 24,187,594 research outputs ever tracked by Altmetric.

A quick, informative summary of this research is available here.

The article will be honored at the 2024 Summer Academic Conference, August 16-18 in Boston, Massachusetts. Previous recipients of this award can be found here.


The other excellent finalists for this award are articles by:

  • Aaron M. Garvey, TaeWoo Kim, and Adam Duhachek
  • Dennis Herhausen, Lauren Grewal, Krista Hill Cummings, Anne L. Roggeveen, Francisco Villarroel Ordenes, and Dhruv Grewal
  • Kristopher O. Keller and Jonne Y. Guyt
  • Simone Wies, Christine Moorman, and Rajesh K. Chandy



Why Marketing Research Needs to Diversify Its Focus [Expert Insights] (Tue, 24 Oct 2023). A new Journal of Marketing study examines decades of marketing research to reveal a number of ways that scholars can better address the challenges marketers currently face.

The post Why Marketing Research Needs to Diversify Its Focus [Expert Insights] appeared first on .


Does research in marketing fail to make meaningful theoretical advancements? Recent analyses have examined this question from various angles, including the fragmentation of knowledge, a lack of practical impact, a tendency toward excessive complexity, and the missed opportunity for homegrown theories. These studies shed light on the issue but have limitations that prevent them from fully diagnosing the problem.

In a new Journal of Marketing study, our research team provides a differentiated analysis of how specific types of knowledge contributions have developed over the past 32 years. Our results both support and question the overall trend of marketing research becoming less disruptive.

Our team conducted computer-aided text analyses of published research articles from the four major marketing journals (Journal of Marketing, Journal of Marketing Research, Journal of Consumer Research, and Marketing Science) to trace the development of different types of knowledge contributions. We find that marketing researchers have focused more and more on identifying new phenomena and explaining relatively well-defined problems. At the same time, there has been less focus on building “big-picture” frameworks and theories and launching critical debates. As a result, marketing academia may find it challenging to provide answers to complex, practical marketing problems.


To better understand the reasons underlying these trends, we conducted a large interview study with 48 thought leaders in marketing, including journal editors, department heads, and authors. On the basis of these interviews, we find that the identified patterns can be traced back to how marketing scholars tend to think about “ideal” research. Anything that cannot be pitched as completely “new,” that is not 100% conceptually clear, or that defies easy quantification will often be brushed aside. Our findings therefore suggest that marketing does not lack novel ideas but rather limits its focus to exploring specific types of ideas. The field could do more to ascertain how such novel ideas challenge or disrupt previous knowledge.


Next Steps for Broadening Marketing Research

What can be done to counter these developments and help scholars provide better answers to the challenges marketing practitioners currently face?

  1. We propose that doctoral training programs be redesigned. For instance, doctoral courses might need to put more emphasis on transmitting the logical, conceptual, and theoretical skills required to engage in critical debate.
  2. Changes in editorial policies can also be a lever to support the development of research that focuses more strongly on bigger pictures. Special issues dedicated to the promotion of these types of knowledge contributions can be a valuable step forward. In general, we call on marketing scholars to engage in and use experience from a wider range of academic and nonacademic fields.
  3. In view of the multidisciplinary character of marketing problems, scholars could also invest more heavily into building collaborations with researchers from neighboring fields. Such collaborations might start at the formation stage when doctoral students from marketing are trained with students from other fields. Another option for collaboration resides in jointly conducting and publishing research.
  4. We also propose that closer interactions between marketing scholars and practitioners, consumer activists, and policy makers provide a promising path to reshaping marketing research. Such interactions can help scholars better appreciate the complexity of practical marketing problems and gear their research approaches accordingly. Specifically, scholars and practitioners can work jointly on research projects or start constructive debates at marketing conferences. Also, practitioners might take more active roles as mentors of aspiring marketing researchers.

A Need for Joint Efforts

Our research offers important implications for the marketing field:

  1. Our documentation of the temporal development of marketing scholarship over the past 32 years indicates that the field does not suffer from an overall lack of theorizing efforts. Instead, our analysis suggests that the field has shifted toward certain types of contributions and that this shift has influenced the general development of marketing knowledge.
  2. Our findings reveal that the tendency to focus on some types of contributions over others affects citation impact. The types of contributions that typically spark the most citations are precisely those that have experienced the steepest decline, suggesting that marketing scholars may be missing an opportunity to achieve higher impact with their work.
  3. Our research suggests that marketing research’s current challenges can only be solved through a joint effort that includes marketing scholars, practitioners, consumer activists, and policy makers involved in marketing. The better we get at rebalancing knowledge creation and emphasizing “big-picture” frameworks and critical debate, the more valuable the results of marketing research will be.
  4. We encourage practitioners, consumer activists, and policy makers to keep an open mind toward collaborating with universities and other research institutes. Of particular value would be collaborations that span a longer period of time and therefore allow the people involved to engage in an in-depth exchange of ideas. While such collaborations will require investments on both sides, the payoff will be worth it—both in monetary and nonmonetary terms.

Read the Full Study for Complete Details

From: Bastian Kindermann, Daniel Wentzel, David Antons, and Torsten-Oliver Salge, Journal of Marketing.



Correcting Flaws in Measuring Willingness to Pay [Expert Strategy] (Tue, 03 Oct 2023). A new Journal of Marketing study provides an improved methodology for determining willingness to pay by taking context and comparisons into account.

The post Correcting Flaws in Measuring Willingness to Pay [Expert Strategy] appeared first on .


At the grocery store, a customer may be willing to pay $18 for a bottle of Riesling when comparing it to a $15 bottle of Chardonnay. However, if that customer learns that the Chardonnay is on sale for $12, they may not be willing to pay $18 for the Riesling. Another customer may only be willing to pay $14 for the Riesling after comparing it to the alternative of not buying anything at all (i.e., keeping their money).

Whether selling consumer packaged goods, durable goods, or services, marketers have always confronted a critical question: What will a customer pay for the market offering? If a marketer charges too little relative to what customers are willing to pay, they risk missing out on profits that could otherwise have been earned. And if a marketer charges too much, an otherwise excellent product or service may fail to generate sufficient demand in the market. Because understanding how much customers are willing to pay for a product or service carries immense practical implications, marketers have sought measurement and analytical tools to capture customers’ willingness to pay (WTP)—a metric that helps them understand the maximum price they can charge for a product or service.

In a new Journal of Marketing study, we reveal limitations in existing methods of measuring WTP and caution that these methods can provide vague and/or inaccurate results. For example, the open-ended question often asked in surveys or focus groups (“How much are you willing to pay for X?”) neither mentions nor offers the respondent any guidance about the relevant comparisons or the relevant context. Another popular method, choice-based conjoint analysis, presents possible comparisons but does not capture which comparison is most relevant for a respondent.


Comparative Method of Valuation

We introduce a new methodology—Comparative Method of Valuation (CMV)—that integrates comparison and/or context and produces greater accuracy and insight. CMV can be thought of as a generalized and enhanced version of the classic methodology.

Context can affect WTP by changing how a customer values a product relative to a comparison or by changing what the relevant comparison is altogether. For example, WTP for a new car model may vary depending on whether the customer is upgrading to this model (comparison: old model), switching from a different model (comparison: other model), or buying a car for the first time (comparison: no car). This means that a valid WTP methodology must be able to capture not only a comparison but also different potential comparisons. However, existing methods often take an agnostic stance on this matter.

While most researchers would likely agree that WTP can vary with the situation, our study reveals how situational factors can affect WTP via two distinct comparative mechanisms.

  • The situation can directly influence a customer’s valuation relative to a given comparative option. For example, consider a beachside vendor selling two brands of beer—Corona and Miller Lite—and some nonalcoholic beverages. The customer wants an alcoholic drink and their preferred option among the alternatives is Miller Lite (priced at $5). However, if the customer has an enjoyment goal, they may value Corona more than Miller Lite, and their WTP for Corona would be more than $5. But if they have a diet goal, they may value Corona less than Miller Lite and their WTP for Corona would be less than $5.
  • The situation can indirectly impact WTP through a change in the comparative option. Taking the previous example, if the customer moves from the beach to the hotel bar, Miller Lite is priced at $8 a bottle but their preferred option may now be a $20 cocktail. In this case, the customer’s WTP for Corona would be determined in comparison to the cocktail instead of Miller Lite. Thus, the situation affects WTP via the indirect pathway; that is, through a change in the comparative option.

Without capturing the specific comparison relevant in a given situation, existing methods inherently contain substantial ambiguity as to what is being measured. Moreover, existing methods cannot delineate the two distinct pathways through which a situational factor may affect WTP. By contrast, CMV offers more precise measurement of WTP and is able to capture the direct and indirect mechanisms through which situational factors affect WTP.

Our procedure allows marketers to move from attempting to measure WTP without comparisons and context to measuring WTP in a manner that integrates these critical factors. As a result, we offer guidance as to how marketers can improve their measurement of WTP and obtain more insight about customers’ WTP. Moreover, our studies also demonstrate how to apply CMV to solve common managerial problems. We show how CMV can be applied to price a premium version of a product relative to a basic version and how to use CMV to evaluate whether more or less of an attribute (e.g., warranty) should be offered.

Read the Full Study for the Detailed CMV Process

From: Sharlene He, Eric T. Anderson, and Derek D. Rucker, Journal of Marketing.



Tear Down This Data Wall (Fri, 02 Jun 2023). A new book in The Seven Big Problems series makes the case for democratizing data to succeed in digital transformations.

The post Tear Down This Data Wall appeared first on .

A new book in The Seven Big Problems series makes the case for democratizing data to succeed in digital transformations.

Last fall, federal rules took effect that required health care organizations to give patients unfettered access to their full health records — troves of information gathered and stored digitally. That meant patients could better understand their care, shop for services, and find their own opportunities to participate in research.

Advertisement

Barriers to better care, lower costs, and research were knocked down when patient information was democratized. It was a major victory for proponents of data sharing, and it’s likely to spur innovation in the sector. It suggests that other organizations’ digital transformations could benefit from collaborating through shared information — further breaking down silos within organizations.

At a time when only 11% of global chief marketing officers believe that they have completed their digital transformation, per a MediaSense survey, there’s a good reason to believe that many organizations need to consider a new framework for implementing change. To tackle the challenge, professors Zeynep Aksehirli, Koen Pauwels, Yakov Bart and Kwong Chan wrote Break the Wall: Why and How to Democratize Digital in Your Business. It’s the third book in the series The Seven Problems of Marketing, edited by Bernie Jaworski.

The authors have more than a century of combined experience around the topic. They’re a multidisciplinary team of business professors in marketing and organizational behavior at Northeastern University; they’ve consulted with numerous companies; they’ve published research on metrics, big data, social media, mobile shopping behavior and more; they’ve written books on the methods and implementation of digital transformation; and they’ve interacted with managers, analysts and data scientists across four continents. 

All this experience pointed to two central questions: How do companies “break the wall” in digital transformation? What does it mean to democratize digital data and insights and embed this learning into a company?

“It’s time to look at it more holistically to see how it fits in with the rest of your organization,” Pauwels says. “Study after study says a lot of these transformation projects start with very lofty, very ambitious goals and they fail to meet their objectives. It’s really something that management and marketing and all the departments have to work together to do to reap the benefits.”

Chapter 1 of the book begins with just such an illustrating story: The CEO of an unnamed company complains that his organization spent millions to hire the top data scientists and gather all their data — but he couldn’t see the impact on the business. The problem, the authors identified, was that the scientists and managers weren’t interacting; there was no close cooperation between business units and sharing of insights. 

Yet after these groups began engaging with one another more closely, and top management began integrating the digital transformation into company strategy — after the process became democratized — only then could results be realized.

Vision and strategy

Before an organization can even begin to undertake a digital transformation, managers need to outline what they hope to accomplish and how to get there. When it comes to the vision, the authors emphasize in the book that “digital transformation is an organizational transformation that changes how an organization employs digital technologies.”

“When people are looking at digital transformation, if they focus just on the digital side and lose track of the transformation, it tends to be hollow and not reach its goals,” Aksehirli says. “The idea of transformation in general has been very well researched, well tested. Instead of just focusing on the digital side, if you can shift the focus to the idea of changing the organization, transforming each person’s responsibilities and perspectives, it goes much easier and much better for the organization.”

And with all that’s known about organizational transformation, it’s worth remembering that it will take each company a different amount of time to complete the work. (“In my work, I always say I plan for a certain amount of time and then I predict it will take double that time,” Pauwels says. “For any transformation project, that’s a good rule of thumb.”)

Understanding that the time frame varies in the same way that culture varies across companies allows your vision to become unique to your organization. Copying another company doesn’t make the vision specific to what your employees and customers need, nor will they feel loyal to the goals or compelled by the transformation.


Why democratize?

When the focus is on the entire organization transforming, every employee needs to play a role. According to Aksehirli, there are two primary reasons a company would want to democratize the process. First, employees make better data-driven decisions when they have access to, and training around, the information.

“Most of the organizations we deal with are aware of data being collected at various departments,” she says. “They’re trying to store it, use it to the best of their abilities. But the differentiator for more successful organizations is that they can use that data to make better decisions — and decisions are not things that we do quarterly at a big scale. It’s every employee every day, making small decisions to make the organization better or fulfill its mission.” 

Allowing any employee to access any part of the data collected by other departments, and knowing how to handle that data, is going to help them make better decisions, she explains.

Second, the open access to data naturally builds trust. It’s not that everybody in the organization needs to know all the data — a customer service representative doesn’t necessarily need to know daily sales data — but roadblocking access is an indication of distrust.

All that said, it's important that companies take a starting temperature before jumping into democratization. If some departments are already working together to share data and collaborate, an upper-management mandate for something that's been underway can come off as uninformed.

And before you do take down barriers to data access, take a beat to understand why those barriers existed in the first place. “Often [there is a] sort of mindless, ‘Okay, let’s try to collect as much data as possible without any concern for consumer privacy, let’s try to share as much data as possible without setting protocols.’… Well, often the barriers that were there before the digital transformation started were there for a reason,” Bart says.

The framework

The book offers a set of guiding principles for digital transformations, inspired by the biological concept of nested adaptive cycles: a model that seeks to understand resilience in the natural world by looking at the continuous, nested adaptation cycles of species.

  1. Initiation of change: Leaders must make the business objectives of a digital transformation clear throughout the organization, explaining why the change is happening and how it benefits everyone.
  2. Implementation of change: The interaction between changes at different levels of the organization drives the true success of digital transformation, thus overcoming the gap between the current and desired status of the transformation.
  3. Building resilience: This is the human element — individual and communal — that makes the change stick. It covers recruitment and training, along with how company culture influences how digital transformation can be approached, but also how the transformation will influence the culture.
  4. Reconsideration and renewal: The vision for digital transformation only materializes when the entire organization comes together to institutionalize the changes.

The chapters describe each of the principles and include stories and examples that illustrate how to implement the framework successfully.

The big takeaways

Each of the authors can identify what they think are some of the key takeaways from their research, experience and subsequent book. But the overarching theme comes down to awareness: of barriers, of roles, and of differences.

The authors urge companies to learn to distinguish between barriers to adoption that are technological and those that are human: Is the slow uptake of data democratization a problem of access to information, software, or hardware? Or is it a matter of resistance among more senior employees or those brought in through acquisitions?

“Sometimes we confuse one for the other,” Chan says. “Just talking to people can help us delineate the real reasons for a lack of change.”

Understanding the role that individuals play in the transformation, and how it aligns with the vision, is an integral part of ensuring everyone contributes. Just as crucial, to avoid resentment or panic, is understanding the speed at which each person and department moves: those working in smaller groups who touch the outside world may recognize problems and solutions much faster than others. Firms should acknowledge that different gears move at different speeds, and that everyone may have a different perspective that must be disseminated to the rest of the organization.

“It doesn’t necessarily mean that the speed of digital transformation is determined by the slowest wheel, the slowest team member,” Bart says. “The ‘move fast and break things’ [mentality] pioneered by [Facebook founder Mark] Zuckerberg no longer applies to digital transformation, because we’ve seen too many organizations that are burning fast after moving fast. Yes, it’s important to start looking at how your company must go through digital transformation as soon as possible, but this action bias that we often see perpetuated from the top when it comes to digital transformation — sometimes it really can backfire, when it is done without setting proper goals for all members of the organization and setting proper speeds.” 

But the authors wouldn't be practicing what they preach if they didn't democratize their own data: the book includes transcripts of interviews with practitioners about their organizations' digital transformations. ("Recognizing how curious minds love deriving their own conclusions from raw data, we present these conversations here," they write.) This way, readers can draw their own, applicable conclusions, and sharing this knowledge with their organizations could prompt others to impart their own experiences.

“These (lessons) are told by way of stories and interviews, and it could be a way to get your own organization to share information about the ways they use technology and have digitized their own work — which could be the best way to make the case,” Pauwels says. “That will often resonate far better than just, ‘this is better, faster, cheaper.’ What’s the story?”

The post Tear Down This Data Wall appeared first on .
