Product · 5 min read

Newsletter vs promotional email: why the same benchmarks mislead both

The single most common mistake we see in email analytics is comparing a newsletter to a flash sale and treating the gap as performance. Clusters fix the apples-to-apples problem without asking marketers to redo their taxonomy by hand.

Open an ESP dashboard. You'll see a list of campaigns, ranked by send date, with open and click rates next to each one. The implicit message of the layout is: compare these numbers to each other.

Don't.

A weekly newsletter and a flash sale are not the same product. A win-back to dormant users and a welcome series to new ones are not the same product. The first thing good email operators do — every single one we've talked to — is squint at that list and mentally re-sort it into groups. The second thing they do is apologize for the sheet they're building to do it properly.

Why flat campaign lists are misleading

A campaign's performance is meaningful only relative to its peers. A 22% open rate is excellent for a re-engagement flight and below the floor for a welcome email. "Above average" and "below average" are statements about a cluster, not about a list — which is also why most A/B tests on email fail to teach anything: the comparison is wrong before the statistics run.
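To see how the same number flips verdicts, here's a minimal sketch. The baseline figures are illustrative numbers we made up for this example, not Sendlens benchmarks:

```python
# Hypothetical baselines, illustrative only -- not Sendlens data.
# The point: each campaign is judged against its cluster's peers,
# never against the flat account-wide average.
CLUSTER_BASELINE_OPEN_RATE = {
    "welcome": 0.45,        # welcome emails typically open very high
    "newsletter": 0.30,
    "re-engagement": 0.15,  # dormant audiences open far less
}

def verdict(cluster: str, open_rate: float) -> str:
    """Compare a campaign to its cluster baseline, not to the whole list."""
    baseline = CLUSTER_BASELINE_OPEN_RATE[cluster]
    return "above baseline" if open_rate > baseline else "below baseline"

# The same 22% open rate reads differently depending on its peers:
print(verdict("re-engagement", 0.22))  # above baseline
print(verdict("welcome", 0.22))        # below baseline
```

One number, two opposite verdicts. That's the whole argument for clustering in six lines.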

ESPs rarely surface this because their data model was never built around the concept. Campaigns are independent rows in a table. Any grouping is either manual (folders, naming conventions, tags you forgot to apply) or absent.

The predictable result: teams either stop using the dashboard entirely, or they over-trust it. Both failure modes are expensive. The first hides real wins; the second celebrates fake ones.

What a cluster actually is

At Sendlens, a cluster is a named group of campaigns that share enough structural similarity that comparing them tells you something real. Concretely: Newsletter, Onboarding, Win-back, Promotions, Announcements. The exact names don't matter. The principle does.

Three things make clusters useful rather than just another tagging scheme:

They auto-suggest from fingerprints. We already analyze every campaign across fifteen-plus structured fields. If three emails share the same layout shape, cadence, subject pattern, and CTA behavior, they almost certainly belong together. The system proposes the cluster; you accept, rename, or override.

They carry their own aggregates. Sent, open rate, CTOR, conversions, conversion rate — calculated across the cluster, not the whole account. Now "our newsletters are converting 1.4% on average" is a real sentence, not a rough approximation.

They hold status. A cluster can be pinned (a known-good baseline), watchlist (something we're testing), or archive (deprecated). That context survives team turnover. The next operator to sit in the seat doesn't have to rebuild the taxonomy from memory.
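To make those three properties concrete, here's a minimal sketch of the data model in Python. The field names, the three-campaign threshold, and the equality-based grouping are all our simplifications for illustration; the real system matches on fifteen-plus structured fields with fuzzier similarity:

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical fingerprint fields -- a stand-in for the fifteen-plus
# structured fields described in the text.
@dataclass(frozen=True)
class Fingerprint:
    layout_shape: str     # e.g. "single-column-hero"
    cadence: str          # e.g. "weekly"
    subject_pattern: str  # e.g. "numbered-digest"
    cta_behavior: str     # e.g. "single-primary-link"

@dataclass
class Campaign:
    name: str
    fingerprint: Fingerprint
    sent: int
    opens: int
    conversions: int

# Status is part of the cluster, so the context survives team turnover.
CLUSTER_STATUSES = ("pinned", "watchlist", "archive")

def suggest_clusters(campaigns):
    """Propose clusters from shared fingerprints.
    (Toy version: exact equality; a real system scores similarity.)"""
    groups = defaultdict(list)
    for c in campaigns:
        groups[c.fingerprint].append(c)
    # Only propose a cluster once three or more campaigns agree;
    # the operator still accepts, renames, or overrides it.
    return [g for g in groups.values() if len(g) >= 3]

def cluster_aggregates(cluster):
    """Aggregates computed across the cluster, not the whole account."""
    sent = sum(c.sent for c in cluster)
    return {
        "sent": sent,
        "open_rate": sum(c.opens for c in cluster) / sent,
        "conversion_rate": sum(c.conversions for c in cluster) / sent,
    }
```

With three newsletters at 14 conversions per 1,000 sends, `cluster_aggregates` returns a 1.4% conversion rate for the cluster, which is exactly the kind of sentence ("our newsletters convert 1.4% on average") a flat campaign list can't produce.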

The part most analytics tools get wrong

When we first prototyped this, we tried to be clever. The first version auto-clustered aggressively and surfaced the results as the primary view. Users hated it. Not because the clusters were wrong — most of the time they weren't — but because they didn't own the clustering.

The fix was not more accuracy. It was more surface area for the human. Clusters are now a proposal. You see what the system thinks, you see why, and the default action is: accept, rename, or move one campaign at a time. The system does the hard part (figuring out what could belong together). The operator does the important part (deciding what should).

This matches something we saw in how Linear's team works, and it's a principle we keep coming back to: autonomy works best when it's introduced gradually. We start with suggestions. We observe. We earn the right to automate.

What to do Monday morning

If your team is in the pre-cluster era, here's the smallest useful move: pick four groupings your team actually argues about — newsletter, onboarding, promo, win-back is a fine starting set — and commit to reporting on those groupings at your next marketing review. Don't worry about the tooling yet. Just change the vocabulary.

The dashboards will catch up.


The goal isn't more data. It's better comparisons. Clusters are the smallest change that moves the conversation from "what did this email do" to "what does our body of work look like, and where is it trending."

That's the question ESPs can't answer for you. It's the question your competitors' best operators are already answering.