The Psychology of Feedback Design: How the Same Ratings Look Better (or Worse) Depending on Format

Thu, 11 Dec 2025

Journal of Marketing Research Scholarly Insights are produced in partnership with a shared interest network for Marketing PhD students across the world.

In an era where ratings and reviews shape consumer behavior and business reputation, the format in which performance scores are presented can dramatically alter how they are perceived. Firms like Uber, Amazon, and TripAdvisor present scores in a variety of formats: incremental (a raw score per occurrence), cumulative (updated average scores), or a combination thereof. A Journal of Marketing Research study examines the impact of incremental scores versus cumulative averages on judgments and explains why these formats matter for managers, platform designers, and policymakers.

It demonstrates that the presentation of performance scores—whether as cumulative averages, individual (incremental) scores, or a combination—can significantly influence how people evaluate products, services, or individuals. The authors find that when a generally well-performing entity receives a negative score, people view it as less damaging when the information is presented in a cumulative format. This presentation reduces negativity bias and helps prevent overreactions such as customer churn. However, incremental formats make single bad scores stand out more strongly, which could be helpful in contexts where managers want to stress accountability or encourage improvement.
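To make the contrast concrete, here is a minimal sketch (in Python, with made-up ratings rather than stimuli from the study) of the same score stream rendered in each format:

```python
# The same rating stream rendered in the two formats discussed above.
# The ratings themselves are hypothetical illustration data.
ratings = [5, 5, 4, 5, 1]  # scores in the order received, most recent last

# Incremental format: each score is shown on its own, so the final 1 stands out.
print("Incremental:", ratings)

# Cumulative format: a running average absorbs the single negative score.
cumulative = [round(sum(ratings[: i + 1]) / (i + 1), 2) for i in range(len(ratings))]
print("Cumulative: ", cumulative)  # ends at 4.0 despite the 1
```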


The implications are far-reaching. For example, a restaurant with fluctuating quality may benefit from incremental formats that highlight recent improvements, while a ride-sharing app might prefer cumulative scores to maintain a stable reputation. The study also reveals that when both formats are presented together, users tend to focus more on the most extreme score—especially if it is negative—suggesting that hybrid formats may not provide the balance designers expect.

Managers can use these insights to tailor score presentation formats to different user segments. Novices may benefit from incremental feedback that encourages progress, while experts prefer cumulative scores that reflect long-term performance. The authors also suggest that dynamically switching formats could help platforms manage user expectations and behavior, though this approach may introduce confusion if not carefully designed.

Ultimately, this research highlights a subtle yet powerful lever for influencing consumer judgment. By rethinking how scores are presented, organizations can more effectively manage perceptions, foster trust, and achieve desired outcomes.

Key Takeaway

Across nine experiments, the authors find that cumulative formats tend to buffer negative feedback, making poor scores appear less severe. This can help reduce customer churn and maintain trust in platforms. In contrast, incremental formats make each score stand out, amplifying the impact of a single negative rating. This can be useful in contexts where accountability and improvement are key.

We had a chance to connect with one of the authors to learn more about their study and gain additional insights:

Q: Your research examines how the presentation format of quantitative scores affects decision makers’ evaluations. Did you have any observations that sparked your interest in reviewing the phenomenon closely and studying its consequences?

A: Yes, my two coauthors, Arne and Jeroen, are incredibly observant of the marketplace, and for this project the initial observation actually came from them. They noticed that platforms like Uber, at the time, were using cumulative rating formats, while another app, Lyft, was using incremental formats. Building on that observation, we also looked at discussions on Reddit to see what people were saying about these differences. We suspected these different formats might have effects, and it became a nice combination of marketplace observation and psychological inquiry. We began asking: what kind of things vary in the marketplace? Do companies differ in their approaches, and could that have an impact?

If different companies are using various formats, there may be a reason behind it. Sometimes, companies haven’t thought it through, but in other cases, especially with tech companies, they have very deliberate reasons for their choices. From a psychological perspective, that makes it especially interesting to delve deeper. That was the starting point of this research.

Q: The research indicates that the presentation format has a significant impact on decision making when scores deviate. How do you suspect these findings would hold (or differ) in contexts where performance expectations are less standardized and more subjective, such as creative services?

A: That is a good question. We haven’t thoroughly examined subjective domains, and several factors may be at play here. One thing to consider is that when it comes to highly subjective matters, people sometimes have strong preexisting preferences. For example, if I like the paintings of a particular artist, I will still appreciate them regardless of the score. In such cases, when people have strong preferences, ratings don’t matter much, and so the rating format likely won’t matter either.

On the other hand, in situations where people don’t have firm preexisting opinions, such as wine tasting, ratings can serve as a crucial cue. Many people lack in-depth expertise (strong preexisting opinions) in wine, so they tend to rely more on ratings (e.g., on an app like Vivino), whereas experts tend to depend less on them. Therefore, it can go either way, depending on how strong people’s preexisting preferences are.

It may also depend on the decision environment. When buying online without direct access to the product, ratings become more influential. If we do have direct access or if rich visual information is available, heavy reliance on ratings decreases. However, on many online platforms, ratings are among the primary pieces of information that influence purchase decisions.

Q: Among the many interesting findings in your study, were there any results that surprised you? If so, could you share which aspects stood out to you the most?

A: Yes, two findings were astonishing. Based on initial observations, one might have predicted that the combined format—which shows both cumulative and incremental ratings—would produce evaluations somewhere in between the two. For instance, if people see a negative score alongside a positive overall average, one might expect them to weigh both pieces of information and arrive at a more moderate judgment. However, consistent with theory on sensitivity to extremes, we found that evaluations in the combined format aligned entirely with the incremental presentation. When they encounter a negative score, it is challenging to ignore, and it strongly pulls down their overall evaluation.

Equally surprising was the strength of this effect. The effect sizes were much larger than we anticipated. To illustrate, in one of our studies, we asked participants to consider a product with an average rating of 4.2 based on five scores, four of which were maximum ratings. When asked to infer the missing score, many participants still significantly overestimated it. Instead of recognizing that the missing score must have been 1, participants often assumed it was a 2 or 3. In other words, the overall average of 4.2 seemed to “pull” their inference upward, making the extreme negative observation less salient than it genuinely was. People may systematically misestimate underlying scores, even in cases where the math is simple.
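The arithmetic behind that inference task is easy to verify. Here is a quick check using the numbers reported above (the code is just a worked illustration):

```python
# Five ratings average 4.2, and four of them are the maximum score of 5.
# The missing score follows directly from the definition of the mean.
n, average, known = 5, 4.2, [5, 5, 5, 5]
missing = n * average - sum(known)
print(round(missing, 2))  # 1.0 -- yet participants often inferred a 2 or 3
```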

Q: Did you observe or imagine any unintended downsides to using cumulative formats, for instance, situations where critical problems might be masked rather than addressed? Can managers detect or avoid sweeping serious negative feedback under the rug in an average-based system?

A: Yes, this is a very real concern. One situation we examined was how cumulative formats can obscure recent performance issues, particularly in contexts such as app evaluations. For example, a local TV station had an app that initially received decent ratings. However, after a significant update, the app’s performance declined. Despite this, the cumulative score remained relatively high, masking the recent problems. In such cases, managers need to look beyond the overall average and examine incremental scores to understand what’s happening in the present.

This issue is not limited to apps. Service contexts such as restaurants, for instance, can vary significantly in quality over time. A place might have been excellent in the past but could be struggling now. If customers only see the cumulative score, they might miss these recent dips in quality. On the other hand, some services are inherently variable, experiencing random hiccups that aren’t sustained. In those cases, a cumulative score might be more representative of the overall experience.

Therefore, yes, cumulative formats can mask critical problems, and managers should exercise caution. They need to actively monitor recent feedback rather than relying solely on averages. Otherwise, they risk overlooking serious issues that could impact customer satisfaction and retention.

Q: Can you envision a system where different user groups (novice/experts, high-value/low-value customers) would benefit from tailored score presentation formats? How might platforms segment their audience or dynamically switch formats to maximize desired behavioral outcomes?

A: Absolutely. The format of score presentation should align with the platform’s goals and the characteristics of its users. For example, if the goal is to encourage users to get started or continue engaging with a service, incremental formats can be more motivating. Imagine a course where the scores are 2, 3, 3, and then a sudden 5. Seeing the incremental progress might encourage someone to keep going. In contrast, a cumulative score might make the journey seem steep or discouraging, despite a recent maximum score.

There may also be the psychological impact of losing a perfect score. For instance, if someone has a cumulative score of 5 and then receives a 4, they may feel as though they’ve lost something valuable. This is a real issue: Some people react strongly to losing a perfect rating, as seen in platforms like Uber. Although we haven’t directly tested these scenarios, they are interesting and relevant.

Different users respond differently to feedback. Some are encouraged by seeing improvement, while others might be discouraged by a dip in their average. Platforms could segment users based on their behavior or preferences and present scores in a format that best supports their engagement. This kind of dynamic tailoring could be a powerful tool for influencing user behavior and satisfaction.

Q: If you could extend this research in any direction, which new context or type of platform would benefit most from experimenting with score presentation formats, and why?

A: A promising direction would be to explore the dynamic switching of formats, where platforms change how scores are presented based on user behavior or context. For example, if a user receives a series of high scores (say, five 5s) and then gets a 1, the platform might switch to a cumulative format to soften the impact; if the user improves again, it may revert to an incremental format. However, this kind of switching can be confusing. Users may not understand why the format changed or what it means for their performance.

Cumulative formats are challenging for users to interpret. They require users to understand that they need to improve consistently to raise their score. This can feel like a slow climb, especially after a setback. The interplay between shifting formats, user expectations, and the pursuit of a perfect score creates a complex psychological landscape.

We’ve only studied one or two sequences that could realistically occur in the real world, but there’s a lot more to explore. Platforms like ride-sharing apps, educational tools, and fitness trackers could benefit from experimenting with different formats. Understanding how users respond to these changes can help platforms design more effective feedback systems that support motivation, satisfaction, and long-term engagement.

Read the Full Study for Complete Details

Source: Christophe Lembregts, Jeroen Schepers, and Arne De Keyser (2023), Journal of Marketing Research, 61 (5), 937–54.

Go to the Journal of Marketing Research

The Power of Verified Reviews in Shaping Buying Decisions and Building Brand Trust

Mon, 25 Aug 2025

Product reviews are a vital touchpoint: they shape consumer purchasing decisions, but they also shape business and product development decisions and brand perceptions. For businesses, reviews are an invaluable source of feedback, offering insights that can improve products, enhance customer service, bolster the brand, and ultimately drive more sales. For consumers, reviews provide an authentic look at products from real users, empowering them to make informed decisions before hitting “buy.”

Our recent study highlights just how important product reviews—especially verified purchaser reviews—are in building consumer confidence, particularly for high-ticket items like electronics and small appliances. This growing trend reinforces the broader shift toward transparency and authenticity in today’s shopping experience, with consumers increasingly turning to trusted sources. So, where are they looking for these reviews?


Seeking the Right Reviews

Insights from our study reveal that 68% of consumers turn to Amazon for product reviews, followed by popular platforms like social media (50%), YouTube (48%), and brand websites (47%). These platforms have become trusted sources of information, with consumers increasingly relying on user-generated content to inform their purchasing decisions. The study also finds that two-thirds (66%) of consumers feel confident in their purchase with just 100 reviews available, emphasizing that it’s not the quantity but the quality of reviews that truly matters to shoppers.

However, a recent industry article highlights a growing issue: While reviews are essential for informed decision-making, an overwhelming number of them—especially when there is an excess of paid or incentivized reviews—can actually create confusion. This can make it difficult for shoppers to distinguish between what’s truly valuable and what’s not. As a result, it’s crucial for brands to provide concise, relevant, and verified reviews that effectively guide consumers without overwhelming them.

Reviews Are a Key Factor for Purchase Decisions

Product reviews are most definitely a key factor in purchase decisions. According to a reputation performance management consultancy, product reviews have become one of the most powerful drivers of consumer decision-making, surpassing traditional influences like company marketing, influencer opinions, and even recommendations from friends and family. This growing reliance on reviews places them firmly at the top of the decision-making pyramid. Whether online or in-store, reviews are often the first thing consumers check—with positive feedback fostering trust and negative comments serving as a red flag. In brick-and-mortar stores, shoppers are increasingly turning to their smartphones to read reviews, often scanning QR codes or visiting retailer and third-party websites to inform their purchases. This highlights how seamlessly reviews have become integrated into the entire shopping journey, no matter what consumers are shopping for at the moment.

Top Areas Where Reviews Are Consulted

A recent analysis reveals that consumer reviews have the greatest influence on high-involvement products, such as electronics and appliances, where authenticity and reliability from verified purchasers are highly valued. For these products, reviews play a critical role in decision-making. Reviews have a slightly lesser impact on low-involvement or lower-ticket items, where consumers are more likely to prioritize factors like brand reputation or price.

In fact, the analysis found that 54% of consumers consider reviews essential when purchasing electronics, while 51% rely heavily on reviews for small appliances. These categories, being high-value purchases, naturally carry more weight in the research process, highlighting the importance of trusted feedback. Reviews still influence sectors such as beauty and personal care (41% and 36%, respectively) or packaged goods (25%), but their role is somewhat less decisive compared to high-ticket items. How those reviews are delivered, however, matters regardless of the product category.

Verified Purchaser Reviews

There is growing skepticism among consumers about the authenticity of online reviews. As information becomes more accessible, distinguishing between genuine and fake reviews has become increasingly challenging. With this growing difficulty, it is not surprising that our consumer study found a strong preference for verified purchaser reviews. These reviews—submitted by real, verified buyers—carry far more weight than incentivized or promotional reviews, and it is easier to determine whether they contain authentic feedback. Consumers trust them more, and these reviews are integral in building both brand trust and confidence in the product’s effectiveness.

Verified purchaser reviews are increasingly seen as the gold standard, offering consumers the assurance that they are receiving unbiased, real-world insights into a product’s performance and quality.

  • Brands that can provide verified purchaser reviews are not just building a repository of feedback—they are laying the groundwork for trust and transparency. Studies show that trust is the most important factor in a consumer’s decision to make a purchase, with consumers saying they would avoid a brand that they perceive as untrustworthy.
  • Verified purchaser reviews give shoppers the confidence they need, particularly when products come with a significant financial commitment.

Reviews Integral to Brand and Product Success

Product reviews have evolved from a simple feature to a central component of the consumer shopping experience. As product reviews continue to strongly inform purchasing behavior, brands must ensure their review systems are transparent and authentic. The shift toward verified purchaser feedback is not just a preference—it’s a necessity for brands that want to succeed in a market driven by informed and discerning consumers.

Increasing Review Helpfulness: Do Photos Complement or Substitute for Text?

Thu, 09 May 2024

Are reviews with photos more helpful? If so, do consumers find reviews more helpful when photos and text convey similar or different information? A Journal of Marketing Research study explores.

Journal of Marketing Research Scholarly Insights are produced in partnership with a shared interest network for Marketing PhD students across the world.

A survey found that an overwhelming majority (93%) of Americans often read customer reviews and ratings when buying a product or service for the first time. As customers increasingly rely on reviews to make decisions, it becomes essential to identify the characteristics of reviews that make them helpful. Do the reviewer’s credibility and expertise matter? Does writing style have an effect? Or can something as simple as adding photos to a review make it more helpful?

In a Journal of Marketing Research study, Gizem Ceylan, Kristin Diehl, and Davide Proserpio explore when and why photos increase review helpfulness. The authors combine a machine-learning analysis of review text and photos from Yelp.com with five experiments to evaluate whether and why the similarity between the text and photos in reviews makes them more helpful.


How Text and Photos Combine to Improve Review Helpfulness

The authors show that, whereas the mere presence of a photo can increase a review’s helpfulness, greater similarity between photos and text heightens this effect. Using a dataset of 7.4 million reviews associated with 3.5 million photos from Yelp.com, the authors provide real-world evidence of a positive association between photo–text similarity and helpfulness. This study also provides preliminary evidence that the positive effect of text–photo similarity on review helpfulness is attenuated when processing ease is low (i.e., text is difficult to read, image quality is low).
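The paper’s exact measurement pipeline is not reproduced here, but photo–text similarity is commonly scored as the cosine similarity between embedding vectors for the two modalities. Below is a minimal, self-contained sketch of that computation; the embedding vectors are hypothetical stand-ins for the output of a multimodal encoder (for example, a CLIP-style model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for encoder output on a review's
# text and its attached photo.
text_embedding = np.array([0.80, 0.10, 0.55])
photo_embedding = np.array([0.75, 0.20, 0.60])

print(f"photo-text similarity: {cosine_similarity(text_embedding, photo_embedding):.3f}")
```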

Through controlled lab experiments, this research delves deeper into the underlying mechanism and finds that perceived ease of processing drives the effect of text–photo similarity on review helpfulness. It also establishes that text and photo fluency act as moderators such that similarity enhances helpfulness when the text and photo are difficult to process (i.e., less fluent to the reader).

These findings have important implications for marketers. They demonstrate that the interplay between visual and verbal content influences review helpfulness. By shedding light on the mechanism behind the effect of text–photo similarity on review helpfulness, this research also provides insights into how review sites can increase review helpfulness by nudging consumers to convey similar content in text and photos, rather than using photos to substitute for text.


We were able to ask several questions to these authors, who provided interesting insights into this article:

Q: You find that similarity between photos and text helps in improving helpfulness perceptions in the context of Yelp reviews. Do you think the impact of similarity would be the same across all platforms and all contexts, or is there any reason to expect a deviation?

A few factors come to mind that could potentially moderate the impact of photo–text similarity on perceptions of helpfulness across different platforms and contexts:

  • Centrality of visuals versus text – What I mean by that is that for certain categories (e.g., clothes) and certain platforms (e.g., Instagram), visual content is more central in the review experience, whereas in other categories (e.g., podcasts) and on other platforms (e.g., Reddit), text is more central. We would expect similarity to matter more in settings where visuals are more central to the experience versus those where text is more central.
  • Experience variability – What I mean by that is the consistency in photos between different users and on different occasions. For durable goods or even experiential purchases such as hotel rooms, where photos remain more or less the same across users and usage occasions, aligned photos may not play as critical of a role as is the case for restaurants.
  • Devices – As we show, similarity helps with fluency. This may be particularly important on mobile devices, because the smaller screen size (vs. laptops) may create greater feelings of difficulty, feelings that similarity-induced fluency may help overcome.

Q: Given your findings that the similarity between text and photos heightens the helpfulness of the review, do you believe that the ratio of photos to text also plays a role in influencing helpfulness? For instance, is there a difference in impact between scenarios with less text but more photos compared to those with more text and fewer photos?

Great question! The simple answer is yes, but probably not the way you would expect. We conducted an experiment to examine how the number of topics in the review text and the number of photos included in the review impact the helpfulness of online reviews. We tested conditions with either one or two topics mentioned in the text, crossed with either one or two photos shown. The results demonstrated that when there was one topic in the text and one matching photo, the review was moderately helpful. That was our baseline. Simply adding a second photo to that review without adding an additional topic did not increase the helpfulness. However, adding a second topic without adding an additional matching photo increased helpfulness. Finally, reviews that included two topics in the text matched with two photos produced the highest helpfulness rating.

These findings suggest two key conclusions. First, people seem to focus relatively more on the text of reviews to obtain useful information compared to the photos. Simply adding more topics led to higher perceptions of informativeness, while adding more photos did not. Second, alignment between the number of topics covered in the text and the number of illustrative photos is important—the greater the match, the easier the review is to process, making it more informative and helpful overall.

Q: What are some challenges associated with multimethod research?

This research project proved to be a valuable learning experience for our team. The reviewers’ feedback challenged us to strengthen the connection between our large-scale data modeling and the experiments. In particular, they emphasized integrating the insights from the Yelp data more tightly into the experimental studies. In response, we worked hard on making the transition from the computational modeling in Study 1 to the experimental settings more seamless by using actual Yelp reviews as stimuli in Studies 2 and 3. Improving the connection between these different components was critical in the review process. As multimethod work is becoming more common to address external and internal validity concerns in the same paper, connecting different data sources and approaches is critical.

Q: More generally, it makes intuitive sense to think online content anywhere would be more helpful if it’s presented with both visual and verbal information. Is there a specific reason why you focus on reviews?

You raise an excellent point: the interaction between text and images we identified likely extends beyond online reviews into many communication contexts. We focused specifically on reviews in this paper for pragmatic reasons, given their importance in influencing consumer decisions and the ready availability of review data to study. However, the core finding that similarity between textual topics and corresponding images improves ease of processing and perceived informativeness has clear implications more broadly.

For example, in science communication, public health messaging, or education, ensuring topic–image congruence could enhance comprehension and engagement. When communicating about a vaccine, matching the text to accompanying visuals should boost understanding by facilitating cognitive processing. Overall, this text–image complementarity effect appears generalizable and can inform effective communication design across many domains, not just reviews. Examining this phenomenon in other settings is an exciting direction for future research that can build on the foundations here.

Q: Do you think the use of different media (smartphones vs. personal computers) could impact how important photos or text are? For example, individuals may be more likely to focus on photos in a less-attention context (smartphones) and more on text in a high-attention context.

For what we find (i.e., that greater photo–text similarity creates feelings of fluency and thus increases helpfulness), high- versus low-attention contexts could be a moderator. When readers don’t devote a lot of attention, the cognitive ease that greater photo–text similarity provides should have a bigger effect on helpfulness. Similarly, another important moderator could be the reader’s motivation to process the information (either chronically or situationally). When motivation is lower, the facilitating effect of greater photo–text similarity should be more impactful. On the other hand, when readers are highly motivated to process, the facilitating effect of greater photo–text similarity may be less.

Read the Full Study for Complete Details

Read the full article:

Gizem Ceylan, Kristin Diehl, and Davide Proserpio (2023), Journal of Marketing Research, 61 (1), 5–26. doi:10.1177/00222437231169711

Go to the Journal of Marketing Research

A Secret for Boosting Hotel Bookings: Analyze Online User Reviews for Both Your Hotel and Your Competitors

Tue, 05 Sep 2023

The online reviews that a hotel receives—as well as those that its competitors receive—have a significant effect on its bookings.


Recent reports indicate that a majority of consumers trust online reviews as much as personal recommendations when deciding to book a hotel. A survey by a popular social media platform for hotel reviews found that 81% of users “usually” or “always” refer to user reviews before they make a booking. Among those surveyed, 52% said that they would never book a hotel that had no reviews.

Understanding how a hotel’s user reviews compare against the competition can help it make better pricing decisions without dropping occupancy numbers and affecting revenue. However, it is not clear how the effect of competitor reviews on a hotel’s demand compares to the effect of the hotel’s own reviews. It is also hard to ascertain when competitor reviews have a larger impact on demand for the hotel than the hotel’s own reviews.

The Importance of Competitors’ Reviews

In a Journal of Marketing study, we investigate the impact of online reviews on hotel booking performance with a specific focus on the competitive effects of reviews. Our research team examines review-based competitive effects, utilizing actual hotel booking data at an individual booking level. Analyzing proprietary data from six branded upscale hotels in six major U.S. cities across three years—along with numerical ratings and review text from TripAdvisor—we investigate the effects of both a hotel’s own reviews and prices and its competitors’ reviews and prices.

Our results indicate that both the online reviews that a hotel receives as well as those that its competitors receive have a significant effect on its bookings. If a hotel’s own sentiment score on a five-point scale were to improve by 1%, it can realize a .38% increase in its bookings at average price. At the same time, an improvement in its competitors’ sentiment score by 1% could decrease its bookings by .25%. In addition, for a hotel that charges high prices, an improvement in its own sentiment score by 1% results in a .54% increase in its bookings, while an improvement in its competitors’ sentiment score by 1% results in a .34% decrease in its bookings.
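As a back-of-the-envelope illustration, the elasticities above can be treated as locally linear (an assumption of this sketch, not a claim from the study) to translate sentiment changes into booking counts; the baseline volume and sentiment improvements below are hypothetical:

```python
# Elasticities reported above, for a hotel at average price.
OWN_ELASTICITY = 0.38    # % change in bookings per 1% own-sentiment improvement
COMP_ELASTICITY = -0.25  # % change in bookings per 1% competitor-sentiment improvement

baseline_bookings = 1_000                # hypothetical monthly bookings
own_gain_pct, comp_gain_pct = 2.0, 1.0   # hypothetical sentiment improvements

delta_pct = OWN_ELASTICITY * own_gain_pct + COMP_ELASTICITY * comp_gain_pct
print(f"Approximate change: {delta_pct:+.2f}% -> "
      f"{baseline_bookings * (1 + delta_pct / 100):.0f} bookings")
```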


Lessons for Hoteliers

Our study offers several insights for hoteliers and Chief Marketing Officers:

  1. Both prices and online reviews of a hotel can significantly affect consumers’ booking decisions and the number of bookings a hotel realizes.
  2. Apart from a hotel’s own review scores, its competitors’ scores can impact bookings—and this impact is even higher if the volume of reviews is high. Thus, hotel managers must evaluate and improve their scores relative to the competition.
  3. Different consumer segments show different booking behaviors depending on the interplay between prices and reviews. For instance, leisure travelers are more influenced by a hotel’s prices, whereas business travelers are more sensitive to online reviews and are primarily concerned with comfort and quality. Hotel managers must develop (or adjust) their strategies according to the needs of nonhomogeneous consumer populations.
  4. Reviews about experience attributes—where consumers must rely on the experiences of others to assess the quality of a hotel stay—have more impact than reviews about search attributes such as location, where information can be obtained directly from a hotel’s website or some other external source. The impact of review sentiment by consumer segment and by review content can help hotel managers understand where to direct their efforts in responding to reviews and addressing consumer concerns.

Additionally, a hotel whose prices are perceived as high may see a larger drop in bookings as a result of more negative reviews or its competitors’ more positive reviews. Hotel managers must evaluate their review scores in conjunction with their prices and incorporate review scores into their marketing strategies. Although most hotel chains employ sophisticated dynamic pricing, we are seeing some initial movement toward incorporating nonprice data into pricing decisions. Our study provides empirical evidence of the significance of the interplay between reviews and prices, and it shows that this interplay may change by consumer segment and review content.

Our findings can provide hoteliers with new insights on the effect of online reviews and review sentiment on hotel demand and serve as a valuable resource when incorporating the effects of their own and their competitors’ reviews into pricing strategies and other marketing activities.

Read the Full Study for Complete Details

From: Sanghoon Cho, Pelin Pekgun, Ramkumar Janakiraman, and Jian Wang, Journal of Marketing.

Go to the Journal of Marketing

The Effects of Profanity in Online Reviews: A Provocative Study

Wed, 26 Apr 2023

Journal of Marketing Research Scholarly Insights are produced in partnership with a shared interest network for Marketing PhD students across the world.

Warning: This article contains strong language that some readers may consider offensive.

Have you ever seen swear words in product reviews? How did they make you feel? Although profanity is typically thought to be offensive language that breaks social norms in marketing contexts, the authors of a recent Journal of Marketing Research study discovered that consumers often perceive product reviews with swear words to be more helpful.

This phenomenon occurs because consumers perceive two meanings from swear words:

  1. Strong speakers’ feelings
  2. Strong product attributes

Consumers assume that when the speakers use swear words, they are expressing strong feelings about the product attributes because they are willing to take a risk and break social taboos to express their feelings. Swear words also convey meanings about the product and intensify the product attributes. For instance, in the sentence “this dishwasher is damn quiet,” the profanity describes the subject and indicates a strong product attribute.

Profanity Style Matters

Despite the benefits of profanity, marketers should pay attention to the number of swear words and the way in which they’re used. When there are too many swear words in a review, it actually lessens the strength of the product attribute being discussed because consumers think the reviewer may be exaggerating or simply prefers to express strong feelings. Additionally, swear words may be presented in different speech styles: uncensored (e.g., fuck), euphemistic (e.g., frick), and censored (e.g., f***). Euphemistic swear words have a similar impact to uncensored swear words. However, censored (vs. uncensored) swear words convey weaker product attitudes and weaker reviewer feelings because they sound different and signal the suppression of openness.

So, in many situations, profanity can cause reviews to be perceived as more helpful and can lead to an increase in positive attitudes toward the product. However, the offensive nature of swear words might be too much for some. Another limitation is that swear words might not be beneficial in contexts with inherently strong attributes or emotions (e.g., “Skydiving is damn fun!”) because the swear word’s meanings may not offer diagnostic information. Overall, the findings suggest that website moderators may be wise to avoid banning swear words, as they can increase the value of reviews and readers’ attitudes toward the reviewed products.


Because of the topic’s prevalence and relevance to multiple contexts, we contacted the authors for further insight into this research.

Q: Swear words are common in our daily life, and most of us experience their effects unconsciously. What inspired you to focus on the effect of swear words?

A: It all started with a conversation among colleagues about the appropriateness of swear words in certain contexts (e.g., a university setting). I started to think about how swear words must do more than just cause offense because they are used so frequently. Swear words conveyed something else, but, at the time, it was not clear what they added to the conversation.

I focused on swearing in word of mouth because I was particularly interested in how consumers use swear words to convey something. What did consumers learn from reading a swear word in a product review? Did the swear word add anything?

Q: This research is about the online word of mouth (WOM) context. Do you think the effect can be extended to other contexts, such as firm advertisements/posts or in-person communications?

A: I think there are many ways that swear words can influence consumers. In our WOM context, we saw that swear words conveyed meaning about the product and the reviewer. However, our model probably would not hold in the context of advertisements. Advertising is typically exaggerated and emotionally intensified, so consumers may not draw the same meaning from swear words. Instead, I would expect that swear words in advertisements exert effects through other pathways (e.g., arousal, shock, attention, interpersonal closeness). It would be important to continue testing these other explanations in new contexts.

Q: This research identified the speaker meaning and the topic meaning as parallel mediators. This dual meaning of a single word is a very novel point. Do you think it’s happening at the conscious level or unconscious level? Are consumers or WOM readers aware of the different meanings of the speaker and topic from the effect of swear words?

A: Great question! In one sense, our model is a cognitive one because of the word of mouth context. Consumers read reviews with the purpose of making inferences about the product and the reviewer. However, they may still be unaware that a single word in a review can change those inferences. Still, multiple swear words in a review may provoke more conscious processing about word choice because it draws so much attention to the reviewer (e.g., Why did they swear so much? Are they prone to strong feelings? Are they exaggerating?).

Q: With swear words often considered taboo, did you encounter any challenges in pursuing this topic?

A: Yes, there were challenges. First, I struggled to justify this idea as the topic of my dissertation. Swear words were a fun topic to discuss with faculty and other PhD students, but everyone had different predictions about what would happen. I could not write a proposal without evidence because there wasn’t even enough literature to support my initial predictions of a positive effect.

Second, while the faculty was extremely supportive of this research, we faced some backlash in the wider academic community. For example, some worried that swear words could be used to incite violence, so it would be unethical to work on any research on the topic. The topic alone seemed to polarize conference reviewers, which made it trickier to get our paper accepted to conferences for feedback. Still, this extra scrutiny helped us figure out how to position the paper for wider audiences.

The third challenge is that it was sometimes difficult to discuss the project via email. If the email contained a swear word (e.g., describing the conditions of the independent variable), the email would sometimes get flagged by the email provider and go straight to the junk folder. This was especially the case if I asked for feedback from someone outside my organization. We constantly had to check our junk folder to make sure we didn’t miss an important email.

Q: What were the most surprising findings of this research?

A: That a single word can communicate two points of information simultaneously. I now consider the possibility of both product and reviewer pathways in all my research on word of mouth. I was also surprised that the positive effect of swear words showed up in so many different product categories.

Q: Norms and languages often have significant differences across cultures – do you think this research could be extended in that direction?

A: Yes, I think there are many ways to consider culture in this model. First, people from some cultures may not draw the same meaning from swear words used as degree adverbs. They could make different inferences about the reviewer, particularly if they knew the demographics of the reviewer (e.g., gender, age). Second, there should be differences across cultures in terms of which words are taboo and therefore which swear words influence readers. Third, some cultures may use swear words more than others, which may change the magnitude of the effect in one way or another. I’m sure there’s many more opportunities than the ones I listed here.

Read the Full Study for Complete Details

Read the full article:

Katherine C. Lafreniere, Sarah G. Moore, and Robert J. Fisher (2022), Journal of Marketing Research, 59 (5), 908–25.

Go to the Journal of Marketing Research

How Online Reviews Influence Doctor Selection [Quality Insights]

Tue, 17 Jan 2023

Do online physician ratings matter? This Journal of Marketing study says rating platforms provide patients with quality information and direct them to higher-quality physicians.


With the spread of technology and increased availability of information, patients increasingly rely on user-generated online ratings when choosing physicians and making other healthcare decisions. A recent survey shows that almost three-quarters of patients rely on online reviews as the first step to finding a new doctor. However, consumers typically lack the specialized knowledge required to evaluate the quality of service, and it is unclear whether online ratings signal physician quality information and affect patients’ physician choices.

In a Journal of Marketing study, we find that online physician rating platforms can help disseminate important quality information to patients and direct them to higher-quality physicians. Our research addresses two questions:

  1. Are online ratings correlated with physician quality?
  2. Do online ratings affect patients’ physician choices—and if so—what are the underlying mechanisms through which ratings affect patients’ physician choices?

Despite the popularity of consumer-generated online physician ratings, their effectiveness and reliability are unclear. The American Medical Association has raised concerns that user-generated physician ratings may lack useful information and that the ratings may not reflect actual patient treatment outcomes. On the other hand, online ratings can be a valuable resource: patients may be able to infer physicians’ clinical quality by observing their own health conditions or by directly assessing physicians’ empathy, attentiveness, and communication skills.

Higher Ratings = Better Quality Care

To examine the impact of user-generated online ratings on healthcare choices, we combine physician rating data from Yelp.com with data from Medicare, which cover a large elderly patient group. For consumers who base their physician choices on online ratings, our finding that physicians with higher ratings have higher clinical quality indicates that those patients will be matched with higher-quality physicians. We find that physicians with higher ratings have better educational and professional credentials as measured by board certification status, ranks of schools, and accreditations. Furthermore, physicians with higher ratings show higher adherence to clinical guidelines, and patients of physicians with higher ratings display better clinical outcomes. These findings indicate that online reviews are highly correlated with important measures of clinical quality and provide important quality signals to patients. We also examine the effects of ratings on patient flow, measured by a physician’s revenue and patient volume, and we find that an increase in a physician’s average rating has positive effects on patient flow and increases the physician’s annual patient revenue and volume.

We use a machine learning algorithm to determine what information is included in online physician reviews. We find that reviews contain signals about physicians’ service-related quality (e.g., a physician’s bedside manner, waiting time, and office amenities) and clinical and treatment-related quality (e.g., treatment, diagnosis, prescription, and outcomes). When choosing a physician, we find that patients respond most to information on physicians’ interpersonal and clinical skills.
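The study does not spell out its algorithm in this summary, so the sketch below uses latent Dirichlet allocation (LDA), a standard topic-extraction approach, on a few toy reviews; it illustrates the general technique rather than the authors’ actual method:

```python
# Illustrative topic extraction with LDA over a bag-of-words representation.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

reviews = [  # toy stand-ins for real physician reviews
    "great bedside manner and friendly staff",
    "long waiting time but accurate diagnosis",
    "clear explanation of the treatment and prescription",
    "short wait, comfortable office amenities",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_terms = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top_terms)}")
```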


Further, we find that patients’ responses to online ratings are greater for physicians with more reviews. This finding is consistent with signaling theory, which indicates that ratings signal more information about a physician’s quality when there is a greater number of reviews. The effects of ratings on patient flow are also larger for physicians with a larger share of younger patients, who have greater access to online rating information. In addition, the positive effect of online ratings on patient flow is greater for solo practitioners, who may lack institutional backing. For self-employed physicians who are not associated with large hospitals and brand names, good online ratings can provide extra information, help signal quality, and reduce patient uncertainty.

Implications for Patients and Health Professionals

Our findings have important implications for policymakers, healthcare managers, physicians, patients, and online physician rating platforms. We find that online ratings are robustly and positively associated with conventional measurements of physicians’ credentials, physicians’ adherence to clinical guidelines, and patients’ clinical outcomes. Our finding that user-generated physician ratings are positively associated with important measures of physician quality highlights that online physician reviews can be a reliable and user-friendly source of information.

It is also important for policymakers, physicians, and online rating platforms to create mechanisms that encourage the accumulation of physician ratings to improve the information quality and reliability of rating platforms. Physicians who wish to improve their patient flow should be mindful of online reputation management; we find that reviews about physicians’ interpersonal and clinical skills have significant effects on patients’ physician choices.

Read the Full Study for Complete Details

From: Yiwei Chen and Stephanie Lee, Journal of Marketing.

Go to the Journal of Marketing

When to Ask Customers for Feedback [Best Practices]

Tue, 10 Jan 2023

A new Journal of Marketing study finds that many companies may want to reevaluate the timing of their review reminders.


Popular websites such as TripAdvisor, Hotels.com, and Booking.com send notifications to customers immediately following checkout, requesting reviews about their recent experience and other feedback. Many firms send automated emails or mobile push notices after a purchase to learn about customers’ recent experiences with the product. This raises the important question: When should companies send out review requests?

In a new Journal of Marketing study, we examine how the timing of review reminders affects the likelihood and quality of product review postings. Issuing review reminders immediately or shortly after purchase of a product or vacation experience may threaten a consumer’s freedom and prompt an adverse reaction. Therefore, some companies send review requests at a later point to revive customers’ memory of their experience.

Reviews.io, a company that helps brands build customer trust by collecting, managing, and publishing reviews, contends that timing is the most important factor in collecting reviews. They advise ecommerce businesses to schedule review requests between 7 and 30 days after order fulfillment, depending on the characteristics of an industry. For example, Judge.me offers a service to retail sellers that sets the number of days following order fulfillment before a review request is sent.
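The scheduling logic behind such services is straightforward. Here is a minimal sketch (not any vendor’s actual API; the categories and delays are hypothetical values inside the advised 7-to-30-day window):

```python
from datetime import date, timedelta

# Hypothetical per-category delays, in days after order fulfillment.
REQUEST_DELAY_DAYS = {
    "apparel": 14,
    "electronics": 21,
    "consumables": 7,
}

def review_request_date(fulfilled_on: date, category: str) -> date:
    """Return the date on which the review request should be sent."""
    return fulfilled_on + timedelta(days=REQUEST_DELAY_DAYS.get(category, 14))

print(review_request_date(date(2023, 1, 2), "apparel"))  # 2023-01-16
```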


However, consumers’ reactions and memories are influenced by the temporal distance between a product experience and a reminder. The likelihood of writing a review decreases as time passes because consumers’ recall becomes blurry. This is all the more reason for companies to find a fine balance between asking for reviews too soon and waiting too long—both of which affect the quality of reviews.

Sooner Is Not Necessarily Better

Our research team performed two randomized field experiments with over 300,000 consumers from online marketplaces offering different types of products. The first experiment involved consumers from South Korea’s largest online travel marketplace, where consumers can book flights, hotels, and guided tours using the company’s website or mobile app. We designed four distinct timing classifications for review reminders: next day, five-day, nine-day, and 13-day intervals after the product experience. We randomly assigned consumers to the treatment group (which received a review reminder) or control group (which did not receive a reminder) for each timing classification.

For the second field experiment, we studied consumers in a major South Korean online apparel marketplace. We created four distinct timing classifications but with different time intervals than the first experiment. Across both experiments, we investigated the temporal effects of review reminders on the quality of the reviews.
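To make the design concrete, below is a minimal sketch of this kind of random assignment; the timing arms come from the first experiment, while the consumer IDs and seed are hypothetical:

```python
import random

# Reminder-timing arms from the first experiment (days after the experience).
TIMING_ARMS_DAYS = [1, 5, 9, 13]
rng = random.Random(42)  # hypothetical seed, fixed for reproducibility

def assign(consumer_id: str) -> dict:
    """Randomly assign a consumer to a timing arm and a treatment/control group."""
    return {
        "consumer_id": consumer_id,
        "reminder_day": rng.choice(TIMING_ARMS_DAYS),
        "group": rng.choice(["treatment", "control"]),
    }

for cid in ["c001", "c002", "c003"]:  # hypothetical consumer IDs
    print(assign(cid))
```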

Our findings demonstrate that requesting a review as soon as possible is not the best strategy. We find that reminders cause problems when they are sent sooner than the average number of days it takes customers to write a review on their own. For example, if a customer orders clothing online, it is too early to send a review reminder the day the product is delivered because people need sufficient time to try the item on and evaluate its quality.

Lessons for Chief Sales Officers

  1. Even though the standard for when it is too early may vary by product type and customer heterogeneity, we anticipate that it may be acceptable to send an early reminder in the case of search goods (e.g., paper towels, bottled water, and canned soups) because consumers have a clear understanding of the products and a high degree of certainty that it will be useful after an initial trial. In contrast, for experience goods (e.g., restaurants, beauty salons, travel), it may be prudent to provide consumers enough time to evaluate the product before sending a review reminder.
  2. Furthermore, our results indicate that overly quick reminders are particularly detrimental for businesses with young consumers. For example, Generation Z has always used digital platforms and is independent and pragmatic. In this sense, prompt reminders may be prone to violating their autonomy and freedom. In other words, the negative impact of an immediate review reminder may be disproportionately greater for younger individuals.
  3. As for the impact of review reminders on review content, we find delayed review reminders can alleviate the poor quality of delayed reviews. However, except for review specificity, the timing of review reminders has a negligible effect on review content such as ratings, sentiment, or length. In other words, the content of the reviews does not change between those who wrote them after the reminder and those who wrote them without the reminder.

Our big lesson for online marketplaces is that it is counterproductive to blindly adopt “faster is better” or “one-size-fits-all” approaches. Instead, companies should reevaluate their current practices and adjust the timing of review reminders to specific consumer target groups in order to elicit more consumer feedback.

From: Miyeon Jung, Sunghan Ryu, Sang Pil Han, and Daegon Cho, “,” Journal of Marketing.

Go to the Journal of Marketing
