Response to a Deliverability Rant
Justin Foster from WhatCounts, an email service provider based in Seattle, wrote a lengthy post about email deliverability on the WhatCounts blog yesterday. There’s some good stuff in it, but there are a couple of things I’d like to clarify from Return Path’s perspective.
Justin’s main point is spot-on. Listening to email service providers talk about deliverability is a little bit like eating fruit salad: there are apples and oranges, and quite frankly pineapples and berries as well. Everyone speaks in a different language. We think the most relevant metric to use from a mailer’s perspective is inbox placement rate. Let’s face it – nothing else matters. Being in a junk mail folder is as good as being blocked or bounced.
Justin’s secondary point is also a good one. An email service provider only has a limited amount of influence over a mailer’s inbox placement rate. Service providers can and must set up an ironclad email sending infrastructure; they can and must support dedicated IP addresses for larger mailers; they can and must support all major authentication protocols — none of these things is in any way a trivial undertaking. In addition, service providers should (but don’t have to) offer easy or integrated access to third-party deliverability tools and services that are on the market. But at the end of the day, most of the major levers that impact deliverability (complaint rates, volume spikiness, content, registration/data sources/processes) are pulled by the mailer, not the service provider. More on that in a minute.
I’d like to clarify a couple of things Justin talks about when it comes to third-party deliverability services.
He’s correct that seed lists work off only a sample of email addresses and therefore can’t tell a mailer with 100% certainty which individual messages reach the inbox or get blocked or filtered. However, when sampling is done correctly, it’s an incredibly powerful measurement tool. Email deliverability sampling gives mailers significantly more data than any other source about the inbox placement rate of their campaigns. Since this kind of data is by nature post-event reporting, the most interesting thing to glean from it is change in inbox placement from one campaign to the next. As long as the sampling is done consistently, that tells a mailer the most critical need-to-know information about how the levers of deliverability are working.
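To make the sampling point concrete, here’s a rough sketch of the arithmetic behind seed-list measurement. The seed counts, function names, and confidence level below are purely illustrative assumptions, not anything from Return Path’s actual tooling:

```python
import math

def inbox_placement_rate(inboxed: int, delivered: int) -> float:
    """Fraction of delivered seed messages that landed in the inbox
    (as opposed to a junk folder)."""
    return inboxed / delivered

def margin_of_error(rate: float, sample_size: int, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a sampled proportion
    at roughly 95% confidence."""
    return z * math.sqrt(rate * (1 - rate) / sample_size)

# Hypothetical seed list: 400 seed addresses, 300 messages hit the inbox.
rate = inbox_placement_rate(300, 400)   # 0.75
moe = margin_of_error(rate, 400)        # about 0.042

print(f"Inbox placement rate: {rate:.0%} +/- {moe:.1%}")
```

The point of the sketch is the last paragraph above: with even a few hundred seeds, the margin of error on the sampled rate is a few percentage points, which is plenty tight to detect campaign-to-campaign movement as long as the same seed methodology is used each time.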
For example, we released our semi-annual deliverability tracking study for the first half of 2005 yesterday (download the whitepaper with tracking study details here or view the press release here). We don’t publicly release mailer-specific data, but the data that went into this study about specific clients is very telling. Clients who start working with us at, say, a 75% inbox placement rate, then work hard on the levers of deliverability and raise it to 95% on a sampled basis, can see the improvement as their sales and other key email metrics jump by 20%. Just because there’s a small margin of error on the sample doesn’t render the process useless.