Marketing Tech and Measurement — Lesson 5

A/B Testing, Attribution, and Data-Driven Decisions

14 min read

Learning Objectives

  • 1. Design valid A/B tests that produce reliable insights.
  • 2. Understand attribution models and their limitations.
  • 3. Make marketing decisions using imperfect data responsibly.

A/B testing fundamentals

An A/B test compares two versions of something — a landing page, an email subject line, an ad — to determine which performs better. Half of your traffic sees version A and half sees version B. After enough data accumulates, you can determine which version produces better results with statistical confidence.
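The traffic split described above is usually done deterministically, so a returning visitor always sees the same variant. A minimal sketch of one common approach, hash-based bucketing (the function name and experiment key are illustrative, not from any particular tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta_test") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the experiment name together with the user ID gives a
    stable, roughly 50/50 split: the same user always lands in the
    same variant, and different experiments split independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user gets the same variant on every visit:
print(assign_variant("user-42"), assign_variant("user-42"))
```

Seeding the hash with the experiment name matters: without it, the same users would always land in the same bucket across every test you run.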

Valid A/B tests require: a clear hypothesis ("changing the CTA from Sign Up to Start Free Trial will increase clicks"), sufficient sample size (hundreds or thousands of observations, not dozens), a single variable changed between versions (not multiple changes at once), and enough time to account for daily and weekly patterns.
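"Statistical confidence" here typically means a significance test on the two conversion rates. A minimal sketch using a standard two-proportion z-test, built only from the math module (the function name and the example numbers are hypothetical):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a/conv_b are conversion counts; n_a/n_b are sample sizes.
    Returns the two rates, the z statistic, and the p-value.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical test: 4,000 visitors per variant, 200 vs 260 conversions
p_a, p_b, z, p = two_proportion_z_test(200, 4000, 260, 4000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, p-value: {p:.4f}")
```

Note how the sample-size requirement falls out of the math: with dozens of observations instead of thousands, the standard error dwarfs any realistic difference and the test cannot reach significance.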

Common testing mistakes: ending the test too early based on exciting initial results, changing multiple elements at once and not knowing what caused the difference, testing trivial elements while ignoring high-impact ones, and not implementing winning variations permanently.

Attribution: an imperfect but necessary science

Attribution assigns credit for conversions to marketing touchpoints. When a customer sees a social ad, reads a blog post, clicks a search ad, and then converts, which touchpoint gets credit? The answer depends on the attribution model.

Last-click attribution gives all credit to the final touchpoint before conversion. First-click gives credit to the first touchpoint. Linear distributes credit equally across all touchpoints. Time-decay gives more credit to touchpoints closer to conversion. Each model tells a different story about what is working.
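The four models above are just different rules for distributing one conversion's credit across an ordered journey. A minimal sketch (function name, weighting scheme for time-decay, and journey labels are all illustrative assumptions):

```python
def attribute(touchpoints, model="linear"):
    """Distribute one conversion's worth of credit across an ordered
    list of touchpoints, according to the chosen attribution model."""
    n = len(touchpoints)
    if model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        weights = [1.0 / n] * n
    elif model == "time_decay":
        # Later touchpoints get exponentially more credit; the doubling
        # per step is an arbitrary choice for illustration
        raw = [2 ** i for i in range(n)]
        weights = [r / sum(raw) for r in raw]
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(zip(touchpoints, weights))

journey = ["social ad", "blog post", "search ad"]
for model in ("last_click", "first_click", "linear", "time_decay"):
    print(model, attribute(journey, model))
```

Running all four models over the same journey makes the section's point concrete: identical data, four different stories about which channel "worked."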

No attribution model is perfectly accurate. Multi-touch customer journeys are complex, cross-device behavior is hard to track, privacy changes limit visibility, and offline interactions are often invisible to digital attribution. Accept attribution as directional guidance, not precise measurement.

The practical approach: use attribution to identify channels that are clearly working or clearly not working. Do not make fine-grained budget shifts based on small attribution differences. Supplement digital attribution with customer surveys ("how did you hear about us?") and conversion analysis.

Making decisions with imperfect data

Marketing data is never complete or perfectly accurate. Tracking has gaps. Attribution models simplify complex journeys. Customer surveys are unreliable. Reports from different tools disagree. The question is not "is the data perfect?" but "is the data good enough to make a better decision than no data?"

Triangulate: look at the same question from multiple data sources. If analytics, CRM data, and customer surveys all point in the same direction, the signal is probably real even if no single source is perfectly accurate.
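The triangulation check can be made mechanical: do all sources move in the same direction? A minimal sketch, assuming each source reports a period-over-period change (the function name and source labels are hypothetical):

```python
def signals_agree(changes: dict) -> bool:
    """Return True if all non-zero period-over-period changes point
    in the same direction (all up or all down).

    changes maps a source name to its observed change, e.g. +0.12
    for a 12% increase. Flat sources (exactly 0) are ignored.
    """
    signs = {(-1 if c < 0 else 1) for c in changes.values() if c != 0}
    return len(signs) <= 1

# Hypothetical example: three imperfect sources, one direction
print(signals_agree({"analytics": 0.12, "crm": 0.08, "survey": 0.15}))
```

Agreement in direction is the useful signal here; agreement on the exact magnitude across imperfect sources is rare and not required.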

Document your assumptions. When you allocate budget based on attribution data, note the model used, the known limitations, and the confidence level. This creates accountability and makes it easier to revisit decisions when better data becomes available.

Case Study

The test that saved $200,000

Situation

A company was about to redesign its entire website in response to declining conversion rates. Before committing to the redesign, it ran A/B tests on the existing page. Test 1: changing the headline increased conversions 15%. Test 2: reducing form fields from eight to four increased conversions 35%. Test 3: adding a customer testimonial above the fold increased conversions 12%. Together, the improvements brought the conversion rate above target without a redesign.
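A rough check of how those three lifts could combine, assuming the effects are independent and compound multiplicatively (an optimistic simplification; interacting changes rarely stack this cleanly):

```python
# Observed lifts from the three hypothetical tests: +15%, +35%, +12%
combined = 1.15 * 1.35 * 1.12

# Under the multiplicative-independence assumption, the combined
# lift is the product of the individual multipliers, minus one
print(f"Combined conversion lift: {combined - 1:.0%}")  # roughly +74%
```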

Analysis

Three focused tests costing $2,000 in time and tools produced more improvement than a $200,000 redesign would have. The tests also revealed that the problem was not the design — it was the headline, form friction, and lack of trust signals. A redesign would have been expensive and might not have addressed the actual issues.

Takeaway

Test before you rebuild. Small, focused experiments often reveal that the problem is not what you assumed, and the fix is simpler than starting over.

Reflection Questions

  • 1. Has your organization ever made a major marketing decision based on data? How confident were you in that data?
  • 2. If you could test one element on your website or in your marketing right now, what would it be?

Key Takeaways

  • Valid A/B tests require a clear hypothesis, sufficient sample size, and a single variable.
  • Attribution models are directional guides, not precise measurements.
  • Triangulate from multiple data sources rather than relying on any single metric.
  • Test small changes before committing to expensive redesigns or overhauls.