Direct mail fundraisers like numbers. Percentages. Data. They have a good reason for this. Direct mail fundraising is essentially a numbers game. Given a particular mail piece, success is determined by the number of people who receive it and the percentage of those people who respond with a gift. The number of gifts divided by the number of mail pieces gives you the response rate. In direct mail, response rate is king.
For this reason, experienced direct mail fundraisers like to do test mailings. A test mailing is a sample piece that is sent to a small number of donors to gauge response rates. Often, fundraisers will do what’s called an “A/B Test” where two different mailings are sent to a small group of their donors. They can then measure donor response to each of these mailers to see which letter is objectively better.
Why go to the trouble? If you’re planning on sending a letter to 100,000 people, the difference between 1.5% response rate and 1.8% is 300 donors. If the average gift is $50, those 300 donors will bring in an additional $15,000. Little percentages can make a big difference.
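The arithmetic above can be sketched in a few lines (figures taken from the example: a 100,000-piece mailing and a $50 average gift):

```python
# Revenue impact of a small response-rate improvement.
mailing_size = 100_000
average_gift = 50.0

def revenue(response_rate: float) -> float:
    """Expected gross revenue for a mailing at the given response rate."""
    return mailing_size * response_rate * average_gift

extra_donors = mailing_size * (0.018 - 0.015)      # 300 additional donors
extra_revenue = revenue(0.018) - revenue(0.015)    # $15,000 additional revenue

print(f"{extra_donors:.0f} extra donors, ${extra_revenue:,.0f} extra revenue")
```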
Statistics power test mailings.
The reason that this works is that you’re dealing with statistics and probability. The science of statistics tells us that if we ask a question to a small sample of a larger group of people, the pattern of their responses will be similar to the pattern of responses in the larger group. So if 2% of prospects sent the test mailer decide to donate, it is likely that 2% of the larger prospect list will also decide to give.
In order to get valid test results, you need to mail to a statistically valid sample. That means you can't send a test mailing to 10 of your close friends: their personal relationship with you, combined with the tiny sample size, makes the results meaningless.
Since response rate percentages tend to be relatively small (0-5%), you will need to send your mailing to enough donors to get useful data. If you only test on 100 potential donors, every single response moves your response rate by a full 1%. If 5 out of 100 give, you have a 5% response rate. That granularity is too coarse to be helpful: a swing of one or two responses, easily due to chance, changes the measured rate by whole percentage points, so you can't compare two response rates with any accuracy.
If you mail the piece to 1,000 potential donors, you will get information with a higher level of accuracy, and you'll be able to gauge which of your two pieces is more effective.
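To see why 1,000 recipients is so much better than 100, here is a rough sketch using the standard normal-approximation margin of error for a proportion (this formula is an assumption of mine, not from the article):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion p from n recipients."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000):
    moe = margin_of_error(0.02, n)
    print(f"n={n:>6}: 2% observed response rate, roughly ± {moe:.2%}")
```

At n=100 the uncertainty (about ±2.7 points) is larger than the 2% rate itself; at n=1,000 it shrinks to under ±0.9 points, which is what makes comparing two pieces meaningful.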
You’ll also want to try to create your two test groups in a way that isn’t biased. For example, you don’t want to test only people in a high income neighborhood with test A and only people in a low income neighborhood with test B. The results will be skewed because your test subjects are not representative of your whole prospect pool.
In creating your test groups, you'll want lists that are representative of the whole, selected more or less at random. If you're working with a direct mail vendor, they should be able to create valid test lists for you. If you're doing it in house, you might try a simple systematic sample, such as taking every 10th name from your donor list, sorted by zip code, to build each of your lists.
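A minimal sketch of one way to draw two unbiased groups, using a plain random shuffle rather than the every-10th-name approach; the donor identifiers and group sizes here are hypothetical:

```python
import random

def split_test_groups(donors: list[str], group_size: int, seed: int = 42):
    """Randomly draw two non-overlapping test groups from the full donor list."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = donors[:]
    rng.shuffle(shuffled)
    group_a = shuffled[:group_size]
    group_b = shuffled[group_size:2 * group_size]
    return group_a, group_b

# Hypothetical example: 10,000 donors split into two test groups of 1,000.
donors = [f"donor_{i}" for i in range(10_000)]
a, b = split_test_groups(donors, 1_000)
print(len(a), len(b), len(set(a) & set(b)))
```

Because both groups come from one shuffled copy of the same list, neither can end up biased toward a particular neighborhood or income bracket, and no donor lands in both groups.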
What to test?
You can test in two ways. The first is global-level testing; the second is fine-tuning.
Global level testing means that you are trying two totally different approaches to find out which one connects with your audience. The pieces might look totally different, have different content, and different asks. You might choose to do global level testing if you are just starting out or trying to change the overall look of your company brand.
If you’re planning to do global level testing, it might be useful to test more than two different packages, despite the additional cost of creating the different direct mail pieces. You should probably limit yourself to 3 or 4 different test pieces, however, because of the time and cost involved.
Fine-tuning compares two pieces that are essentially the same, but with differences in small details. You might change the wording of your ask, for instance. Or you might put a different picture on the envelope. These small changes can sometimes make a significant difference in response rate. Remember that even a 0.1% difference in response rate can make a big difference in the overall success of the mailing.
Some items that you might A/B test when you’re fine tuning include:
- Envelope color.
- Picture on the envelope.
- Phrase on the envelope.
- Response device.
If you’re trying to do a test mailing yourself, it’s important that you have a way to track the results of your test. That means that you need to mark your response devices differently so you can identify which package the donor received. Tracking this data is absolutely essential for getting meaningful information.
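As a simple illustration of that tracking, here is a sketch that tallies returned response devices by the code printed on each package; the "A"/"B" codes and mailing counts are hypothetical:

```python
from collections import Counter

# Codes read off the reply devices that came back with gifts (hypothetical data).
returned_codes = ["A", "B", "A", "A", "B", "A"]
mailed = {"A": 1_000, "B": 1_000}   # how many of each package were sent

gifts = Counter(returned_codes)
for code, sent in mailed.items():
    rate = gifts[code] / sent
    print(f"Package {code}: {gifts[code]} gifts / {sent} mailed = {rate:.2%}")
```

Without a distinct mark on each package's response device, the returned gifts can't be attributed to either test piece, and the test tells you nothing.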
The Final Test.
The best heads in direct mail see this type of fundraising as a process that is repeated over time. You’ll never create the perfect direct mail piece that gets 110% response. It just isn’t going to happen. So you’re going to have to eventually mail the whole list with your best piece.
But that doesn’t mean that it’s all over after one mailing. There is no hard rule in fundraising that says that you have to create an entirely new direct mail piece for every year. You can start with last year’s package and do a new test to see if you can get another couple tenths of a percent out of it next year.
The reason you can reuse direct mail pieces like this is simple. Most people who get your direct mail are not opening it. Like 95% or more. So for 95% of the people out there, that tired old piece that you sent last year is something brand new. They might open it this year because it catches them after their coffee, or they just read a good book that stirred their hearts to charity. Who knows?
Continual testing means constantly improving your connection to your audience. With each round of testing you select the winner, so the piece should gradually get better and better. If your response rate drops through the floor with a piece, you can still go back to an earlier version to test whether it's the piece itself, or just that version of the piece, that failed to connect with your audience.
Testing tells you whether the changes you're making are actually improvements. It takes the guesswork out of direct mail and enables you to make sound, evidence-based decisions.