Experiments Illustrated: Can $1 Change Behavior More Than $100?

This post first appeared on Towards Data Science.

Mar 11, 2025 - 20:28

I currently lead a small data team at a small tech company. With everything small, we have a lot of autonomy over what, when, and how we run experiments. In this series, I’m opening the vault from our years of experimenting, each story highlighting a key concept related to experimentation.

Here, I’ll share a surprising result from an early test of our referral-bonus program and use it to discuss how you might narrow the decision set for your experiments (at least when they involve humans).

Background: It’s COVID and we need to hire a zillion nurses 

IntelyCare helps healthcare facilities match with nursing talent. We’re a glorified nurse-recruiting machine and so we’re always looking to recruit more effectively. Nurses come to us from many sources, but those who come via referrals earn higher reviews and stay with us longer. 

The year was 2020. IntelyCare was a baby company (still is by most standards). Our app was new and most features were still primitive. Some examples…

  • We had a way for IntelyPros to share a referral link with friends but had no financial incentives to do so. 
  • Our application process was a major slog. We required a small mountain of documents for review in addition to a phone interview and references. Only a small subset of applicants made it all the way through to working. 

During a recruiting brainstorm, we latched onto the idea of referrals and agreed that adding financial incentives would be easy to test. Something like, “Get $100 when your friend starts working.” Zero creativity there, but an idea doesn’t have to be novel to be good. 

Knowing that many people might refer again and again if they earned a bonus, and knowing that our application process was nothing short of a gauntlet, we also wondered if it might be better instead to give clinicians a small prize when their friends start an application.

A small prize for something easy vs a big prize for something difficult? I mean, it depends on many things. There’s only one way to know which is best, and that’s to try them out. 

The referral test

We randomly assigned clinicians to one of two experiences:

  1. The clinician earns an extra $1/hour on their next shift when their referral starts a job application. (Super easy. Starting an application takes 1–2 minutes.)
  2. The clinician earns $100 when their referral completes their first shift. (Super hard. Some nurses race through the process, but most applicants take several weeks or even months, if they finish at all.)

We held out an equal third of clinicians as a control and let clinicians know the rules via a series of emails. There’s always a risk of spillovers in a test like this, but the thought of one group stealing all the referrals from the other group seemed like a stretch, so we felt good about randomizing across individuals. 
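The assignment described above can be sketched as a deterministic three-way split. This is a minimal illustration, not the actual mechanism used: the arm names, the seeding scheme, and the per-ID shuffle are all my own assumptions.

```python
import random

ARMS = ["dollar_per_hour", "hundred_on_first_shift", "control"]

def assign_arm(clinician_id: str, seed: int = 42) -> str:
    """Assign a clinician to one of three equal-probability arms.

    Seeding a fresh RNG with (seed, clinician_id) makes the assignment
    deterministic per clinician, so re-running the job never reshuffles
    anyone between arms. Arm names here are hypothetical.
    """
    rng = random.Random(f"{seed}:{clinician_id}")
    return rng.choice(ARMS)
```

Because each clinician hashes to the same arm every time, emails announcing the rules can be sent in batches without risk of anyone switching groups between sends.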

Decidedly non-social: Many people hear these two options and ask, “Did you think of trying prosocial incentives?” (Example: I refer you, you do something, we both get a prize.) Studies show they’re often better than individual incentives, and they’re quite common (Instacart, Airbnb, Robinhood, …). We considered these, but our finance team became very sad at the idea of us sending $1 each to hundreds of people who may never become employees. 

I guess QuickBooks doesn’t like that? At some point, you just accept that it’s best not to mess with the finance team. 

Since the $1/hr reward could not be prosocial without becoming a major headache, we limited payouts in both programs to the referring individual only. This gives us two referral programs where the key differences are timing and the payout amount.
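To see how timing interacts with payout amount, here is a back-of-the-envelope expected-cost comparison. Every number below is hypothetical, chosen only for illustration; none come from the actual test.

```python
# Hypothetical funnel rates and shift length -- illustration only.
P_START_APPLICATION = 0.60     # referral starts an application
P_COMPLETE_GIVEN_START = 0.10  # applicant finishes and works a first shift
SHIFT_HOURS = 8                # hours on the referrer's next shift

# Arm 1: $1/hour on the next shift, paid as soon as the referral applies.
cost_arm1_per_referral = P_START_APPLICATION * 1 * SHIFT_HOURS  # $4.80

# Arm 2: $100 flat, paid only when the referral completes a first shift.
p_complete = P_START_APPLICATION * P_COMPLETE_GIVEN_START       # 0.06
cost_arm2_per_referral = p_complete * 100                       # $6.00

# Cost per *completed* referral, the outcome the business cares about.
cost_arm1_per_hire = cost_arm1_per_referral / p_complete        # $80.00
cost_arm2_per_hire = 100.0                                      # paid only on success
```

Under these made-up rates the programs cost roughly the same per referral, even though the headline prizes differ by 100x, which is exactly why timing deserves as much attention as the dollar amount.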

Turns out timing and presentation of incentives matter. A lot. Social incentives also matter. A lot. If you’re trying to growth-hack your referral program, you would be smart to consider both of these dimensions before increasing the payout. 

Nerdy aside: Thinking about things to test 

Product data science, with minimal exception, is interested in how humans interact with things. Often it’s a website or an app, but it could be a physical object like a pair of headphones, a thermostat, or a sign on the highway. 

The science comes from changing the product and watching how humans change their behavior as a result. And you can’t do better than watching your customers interact with your product in the wild to learn whether a change was helpful or not. 

But you can’t test everything. There are infinite things to test and any group tasked with experimenting will have to cut things down to a finite set of ideas. Where do you start? 

  • Start with the product itself. Ask people who are familiar with it how they like it and what they wish were different (the Sean Ellis test, NPS, the Mom Test, and so on). This is the common starting point for product teams and just about everyone else. 
  • Start with human nature. For decades, behavioral scientists have documented consistent patterns in human behavior. These scientists go by different names (behavioral economists, behavioral psychologists, etc.). 

In my humble opinion, the second of these starting points is severely underrated. Behavioral science has documented dozens of behavior patterns that can inform how your product might change most effectively. 

A few honorable mentions…

  • Loss aversion: people hate losing more than they like winning
  • Peak-End: people judge an experience largely by its most intense moment and how it ends 
  • Social vs Market Norms: everything changes when people pay for goods and services instead of asking for favors
  • Framing: people make choices based on how information is presented
  • Left-digit Bias: perceptions of a price are disproportionately influenced by the leading digit ($0.99 feels far cheaper than $1.00)