The best relationships have give and take. Likewise, the best tech stack for your business should be a two-way street. Companies get value from being able to microtarget, personalize and automate with martech, but like going to lunch with a friend and telling them a good story, you don't get up and walk away when you're done talking. You sit and listen to what your friend has to say.
That's why technology solutions and services that facilitate customer discovery (social media marketing, web analytics and optimization) are this year's top investments for high-revenue companies.
Behavior data is one level of customer discovery, but an even more effective practice is influencing what customers will do and seeing how the changes you make affect real customer decisions. This practice can lead companies to better serve their customers. As Lynn Hunsaker wrote on Customer Think, "Thankfully, many companies have been migrating away from product-centric, month-/quarter-end-centric and competitor-centric marketing toward putting the individual or most profitable customer at the center of marketing design and delivery."
Accurately measuring the effects of your changes on customer behavior requires rigorous experimentation, but experimentation doesn't necessarily come naturally to most marketers. We're creatives and businesspeople, not scientists.
To squeeze the most value and customer discoveries from your marketing technology, you need to think in a radically different way and overcome the three biggest hurdles that brands struggle with in marketing experimentation.
1. Getting out of a Rut
Marketers often fall into a checklist mentality with their testing. Testing technology is quite simple at its core: It splits traffic to a brand and measures visitor behavior. Even advanced multivariate technology that tests multiple combinations can limit experimentation to changing many similar variables (for example, button colors, headlines or images). If you don't test the right things, experimenting won't change much. If you test what you already know, you will not discover anything new. What matters most is what you decide to test.
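The traffic split itself is the simple part. A minimal sketch of how a tool might do it, using deterministic hash-based bucketing (the function name, experiment names and 50/50 split are illustrative, not any particular vendor's API):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a visitor into one arm of a test.

    Hashing the visitor and experiment names together means the same
    person always sees the same variant on every repeat visit, which
    keeps the measurement clean."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variant = assign_variant("visitor-42", "headline-test")
```

The hard part, as the paragraph above says, is deciding what the two variants should be.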
Marketing experimentation is the perfect time to "have no respect for the status quo," as Apple's "Think Different" ad said: to see things differently.
Thought tools can help you see different elements of your marketing message in a new light by breaking down the factors that influence conversion. One example is the MECLABS Institute conversion sequence heuristic.
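The heuristic is commonly written as C = 4m + 3v + 2(i − f) − 2a, where C is the probability of conversion, m is the visitor's motivation, v is the clarity of the value proposition, i is incentive, f is friction and a is anxiety. A toy scoring function makes the trade-offs concrete (the inputs are illustrative ratings, and the coefficients express relative weight rather than exact arithmetic):

```python
def conversion_index(m: float, v: float, i: float, f: float, a: float) -> float:
    """MECLABS conversion sequence heuristic: C = 4m + 3v + 2(i - f) - 2a.

    m: motivation, v: value proposition clarity, i: incentive,
    f: friction, a: anxiety (all as rough 0-5 ratings here)."""
    return 4 * m + 3 * v + 2 * (i - f) - 2 * a

# A longer page may add friction (f) but cut anxiety (a) and sharpen
# the value proposition (v); the heuristic lets you reason about the net effect.
```

Note that motivation and value proposition carry the heaviest weights: removing friction alone rarely rescues a weak message.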
One brand that broke out of its box is HealthSpire, a startup within Aetna. HealthSpire wanted to use a landing page to generate leads for its call center. Its goal was to keep the landing page concise and avoid confusion. This isn't unique to HealthSpire; it's an assumption I've heard from many marketing departments and advertising agencies: Customers want short landing pages. But HealthSpire decided to experiment with the conversion heuristic and test a longer page. HealthSpire hypothesized that the customer would be willing to deal with the increased friction from the longer page in exchange for decreased anxiety and increased clarity of the value proposition. The result was .
You're not likely to see a big increase in performance if the messaging and creative you test are boxed into what you've always done. Experimenting is your chance to challenge the status quo.
2. Having Too Many Answers
We have to change not just our opinions about what will work but also our resistance to new ideas. We need to approach our jobs in a new way. Like many of you, I'm a professional marketer. Marketers have big personalities and are great at arguing persuasively in pitch meetings, but arguments and guesswork will only get us so far. To be good experimenters, marketers need to change their thinking: ask more questions and be less sure of their answers to a marketing messaging problem.
Here's a perfect example: 15 minutes before the start of an event, I got an email from a junior marketer about a recent A/B test. The results had just come in, and his headline beat our CEO's headline. Our CEO, Flint McGlaughlin, was just about to get on stage and present the opening session for this event. I grabbed him and showed him the results just to tease him. Flint was humble enough to say, "That's great. Let's open the event with it," and proceeded to .
Society tells us to act as if we know everything, even if we have to bluff our way through. But the scientific method tells us that true strength lies in forming a testable hypothesis and taking a systematic approach to draw conclusions from evidence.
3. Accurately Answering Your Own Questions
Answers with evidence are very powerful, but you have to ensure that your evidence is accurate. Validity threats can skew the data gathered in your experiments, causing you to (very confidently) make the wrong decision. A validity threat makes it impossible to isolate the variable you're changing, such as a headline or an entire landing page approach, to measure its effect on customer behavior.
One example of a validity threat applicable to martech is called an instrumentation effect. An instrumentation effect might be a page that took longer to render because of something erroneously loading in the background, problems with the testing and analytics software or emails that don't get delivered because of a server malfunction. You can't be certain whether it was the change you made to the messaging that caused different results or something in the instrumentation you used to deliver and measure the message.
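One common way practitioners catch instrumentation problems is a sample ratio mismatch check: if a test was configured for a 50/50 split but one arm receives far fewer visitors (say, because its page sometimes fails to load), the instrumentation is suspect. A minimal sketch, with the function name and the z > 3 threshold chosen for illustration:

```python
import math

def sample_ratio_mismatch(n_a: int, n_b: int, expected_a: float = 0.5) -> bool:
    """Flag when the observed traffic split deviates from the configured
    split by far more than random assignment would allow, a common
    symptom of an instrumentation effect."""
    n = n_a + n_b
    mean = n * expected_a                               # expected count in arm A
    sd = math.sqrt(n * expected_a * (1 - expected_a))   # binomial std deviation
    z = abs(n_a - mean) / sd
    return z > 3.0  # a 3-sigma deviation is very unlikely under a healthy split

healthy = sample_ratio_mismatch(5000, 5100)     # balanced within noise
broken = sample_ratio_mismatch(5000, 6000)      # far outside random variation
```

A flagged test should be investigated and usually rerun, not analyzed as-is.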
Another challenge can be the rigor of your experiments. For example, journalist Dahlberg wrote about the questionable practices of Brian Wansink, head of the Food and Brand Lab at Cornell University.
Dahlberg writes, "When his first hypothesis didn't bear out, Wansink wrote that he used the same data to test other hypotheses." Dahlberg quotes University of Pittsburgh statistician Andrew Althouse, who explains that studying lots of data is fine, but "p-hacking" (when researchers play with data to arrive at results that look like they're statistically significant) is a problem.
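A short simulation shows why p-hacking misleads: run enough tests on data with no real difference, and some will still look "significant" by chance. This sketch (all numbers illustrative) compares two identical variants 200 times and counts the false "wins":

```python
import random

random.seed(0)

def looks_significant(n=500, rate=0.10):
    """Simulate an A/B test where both arms are IDENTICAL (no real effect).

    Any 'significant' difference found here is pure noise."""
    a = sum(random.random() < rate for _ in range(n))
    b = sum(random.random() < rate for _ in range(n))
    p1, p2, p = a / n, b / n, (a + b) / (2 * n)
    se = (2 * p * (1 - p) / n) ** 0.5   # pooled standard error
    z = abs(p1 - p2) / se
    return z > 1.96                      # roughly p < 0.05, two-sided

# Torture the same kind of data with 200 hypotheses:
# at a 5% false-positive rate, expect around 10 spurious "wins."
false_wins = sum(looks_significant() for _ in range(200))
```

This is why a hypothesis must be stated before the data is collected: testing dozens of after-the-fact hypotheses guarantees that something will eventually "work."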
Biases don't disappear the moment we decide to run an experiment. If we really want to find something, it's human nature to find it. I drive a Nissan LEAF, and before I owned one, I never noticed LEAFs on the road. Now, I see them everywhere. I could conclude anecdotally that electric vehicle adoption is really taking off, but the likelier explanation is that the data didn't drastically change; what I was looking for did.
That's why it's not enough to experiment; the way we run those experiments is critical.
"For scientific testing, it's very important to remove all possible bias that could occur during the experiment. The goal of an experiment is to prove/disprove a hypothesis, not to find a statistically significant result within the data," says Cameron Howard, data specialist at MECLABS Institute.
Make Data-Driven Decisions with Well-Run Experiments
Experimenting is powerful. It's the engine that drives our technological revolution. But it's not enough to just run any marketing experiment and hope to get a result. You have to test elements that will truly affect conversion, take a question-based hypothesis approach and make sure you run a valid experiment to get reliable results.
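Putting the pieces together, analyzing a single pre-stated hypothesis comes down to a standard significance test. A sketch using a two-proportion z-test (the function name and the conversion counts are illustrative):

```python
from math import erf, sqrt

def lift_is_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is the difference in conversion rates
    larger than chance alone would plausibly produce?"""
    p1, p2 = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))     # standard error
    z = abs(p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2)))) # two-sided, normal CDF
    return p_value < alpha, p_value

# 120/1000 conversions on the control vs. 160/1000 on the treatment:
significant, p_value = lift_is_significant(120, 1000, 160, 1000)
```

Run once, on the hypothesis you committed to in advance, this kind of test is what turns a marketing opinion into evidence.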