CBCreatorBenchmarks
Methodology · How the score works

The score, in full.
No black box.

Every number on a benchmark report is computed from public profile data using the formulas below. If a method is wrong, we fix it in public — see the changelog.

01 · Score

The 0–100 overall is a weighted average

The overall score blends four sub-scores using fixed weights: Engagement 35%, Consistency 25%, Growth 20%, Authenticity 20%. Each sub-score is itself a 0–100 value computed against the same cohort, so a 70 in Engagement and a 70 in Authenticity are comparable on the same scale.
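
For concreteness, the blend reduces to a few lines of Python. The weights below are the published ones; the function name and the worked numbers are ours, purely for illustration.

```python
# The published weights, applied to four 0-100 sub-scores.
WEIGHTS = {
    "engagement": 0.35,
    "consistency": 0.25,
    "growth": 0.20,
    "authenticity": 0.20,
}

def overall_score(sub_scores: dict[str, float]) -> float:
    """Blend the four 0-100 sub-scores into the 0-100 overall."""
    return sum(WEIGHTS[pillar] * sub_scores[pillar] for pillar in WEIGHTS)

# Example: 70/80/50/60 -> 0.35*70 + 0.25*80 + 0.20*50 + 0.20*60 = 66.5
print(overall_score({"engagement": 70, "consistency": 80,
                     "growth": 50, "authenticity": 60}))
```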

Weights are global, not personalized. We don’t adjust them per niche or per follower tier. The score measures the same thing for everyone, by design.

02 · Cohort

The benchmark is your peers, not the platform average

A creator’s score is computed against a peer cohort, not against the entire platform. The cohort is the intersection of:

  • Platform — Instagram and TikTok are scored separately.
  • Niche — auto-detected from bio keywords and hashtag patterns. You can override the detected niche on the report; the cohort updates with you.
  • Follower tier — nano (1K–10K), micro (10K–100K), mid (100K–1M), macro (1M–10M), mega (10M+).

We require at least 200 creators in a cohort before we ship a benchmark for it. Below that, we widen the tier band (e.g. micro + mid combined) and label the report with the wider cohort explicitly. We never silently fall back to platform-wide averages.
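
Here is a minimal sketch of that fallback, assuming a `cohort_size` lookup that returns the creator count for a (platform, niche, tier-band) key. The tier names and the 200-creator floor come from this page; the lookup's shape and the widening order (upward first) are our assumptions.

```python
TIER_NAMES = ["nano", "micro", "mid", "macro", "mega"]
MIN_COHORT_SIZE = 200  # published floor before a benchmark ships

def resolve_cohort(platform: str, niche: str, tier: str, cohort_size) -> tuple:
    """Widen the tier band until the cohort meets the 200-creator floor.

    `cohort_size(platform, niche, tiers) -> int` is an assumed lookup;
    widening upward before downward is also an assumption.
    """
    band = [tier]
    while cohort_size(platform, niche, tuple(band)) < MIN_COHORT_SIZE:
        lo = TIER_NAMES.index(band[0])
        hi = TIER_NAMES.index(band[-1])
        if hi + 1 < len(TIER_NAMES):
            band.append(TIER_NAMES[hi + 1])     # e.g. micro -> micro + mid
        elif lo > 0:
            band.insert(0, TIER_NAMES[lo - 1])
        else:
            break  # all tiers merged; never fall back to the platform average
    return platform, niche, tuple(band)          # the band is labeled on the report
```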

03 · Engagement

Real ER from real posts, not the back-of-envelope formula

Engagement rate is calculated as (avg_likes + avg_comments + avg_shares) / followers × 100, averaged over the last 30–60 posts on Instagram or the last 30 videos on TikTok. We exclude pinned posts to avoid skew.
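
In code, the calculation looks roughly like this. The `Post` shape is an assumption, and we assume the post list arrives newest-first.

```python
from dataclasses import dataclass

@dataclass
class Post:              # assumed shape of a scraped public post
    likes: int
    comments: int
    shares: int
    pinned: bool = False

def engagement_rate(posts: list[Post], followers: int, window: int = 60) -> float:
    """(avg_likes + avg_comments + avg_shares) / followers * 100.

    `posts` is assumed newest-first; pinned posts are excluded to avoid skew.
    """
    recent = [p for p in posts[:window] if not p.pinned]
    n = len(recent)
    avg_likes = sum(p.likes for p in recent) / n
    avg_comments = sum(p.comments for p in recent) / n
    avg_shares = sum(p.shares for p in recent) / n
    return (avg_likes + avg_comments + avg_shares) / followers * 100
```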

The Engagement sub-score is the percentile rank of that ER within the cohort, mapped onto 0–100. A creator at the cohort median scores 50 on this pillar regardless of the absolute number.
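
The percentile mapping might look like the sketch below. The mean-rank tie convention is our assumption, chosen so that the cohort median lands at exactly 50, as the text describes.

```python
from bisect import bisect_left, bisect_right

def engagement_subscore(er: float, cohort_ers: list[float]) -> float:
    """Percentile rank of an ER within its cohort, on a 0-100 scale."""
    xs = sorted(cohort_ers)
    below = bisect_left(xs, er)        # cohort values strictly below er
    at_or_below = bisect_right(xs, er)
    return 100 * (below + at_or_below) / (2 * len(xs))  # median -> 50
```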

04 · Consistency

Cadence holds, gaps don’t

Consistency is a function of two things: posts per week (relative to cohort median) and the standard deviation of posting gaps over the analyzed window. A creator who posts 4 times a week every week scores higher than one who posts 8 times one week and 0 the next, even if the totals match.

We penalize intentional pauses less harshly for small accounts than for larger ones. Below 10K followers, the consistency floor is more forgiving: most nano creators have day jobs.
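
A minimal sketch of the two inputs, assuming timestamps in UNIX seconds. How the two signals and the sub-10K leniency combine into the final 0–100 value is not published, so the sketch stops at the raw signals.

```python
from statistics import pstdev

def consistency_signals(timestamps: list[float], cohort_median_ppw: float):
    """The two published inputs: posts per week and posting-gap variability.

    `timestamps` are UNIX seconds for each post in the analyzed window;
    the blend into a 0-100 sub-score is not published, so we stop here.
    """
    ts = sorted(timestamps)
    weeks = (ts[-1] - ts[0]) / (7 * 86_400)
    ppw_ratio = (len(ts) / max(weeks, 1e-9)) / cohort_median_ppw
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    gap_stdev = pstdev(gaps)  # 8-then-0 posting yields a much larger value
    return ppw_ratio, gap_stdev
```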

05 · Growth

Trajectory, not absolutes

Growth compares the 90-day follower delta to the cohort’s 90-day median delta, in both absolute and percentage terms. The two are blended so that a stalled mega-creator (huge audience, no growth) doesn’t outrank a rising micro-creator on raw scale alone.

Negative growth is not automatically penalized for accounts above 1M followers, where attrition is statistically normal. We say so explicitly on the report when this rule applies.
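
A sketch of the blend, under stated assumptions: the even 50/50 split is our guess (the page only says the two terms are combined so raw scale alone cannot dominate), and clamping negative growth to neutral above 1M followers is our illustration of “not automatically penalized.”

```python
def growth_signal(delta_90d: int, followers_90d_ago: int, followers_now: int,
                  cohort_median_delta: float, cohort_median_pct: float) -> float:
    """Blend absolute and percentage 90-day growth against cohort medians.

    The 0.5/0.5 weights and the clamp-to-neutral rule for 1M+ accounts
    are assumptions, not published constants.
    """
    pct = 100 * delta_90d / followers_90d_ago
    if followers_now > 1_000_000 and delta_90d < 0:
        delta_90d, pct = 0, 0.0   # attrition above 1M is treated as neutral
    abs_ratio = delta_90d / cohort_median_delta
    pct_ratio = pct / cohort_median_pct
    return 0.5 * abs_ratio + 0.5 * pct_ratio  # then ranked within the cohort
```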

06 · Authenticity

Inferred signals, not a verdict on follower validity

Authenticity reads four public signals: like-to-comment ratio, comment depth (average length, emoji-only ratio), engagement variance across recent posts, and follower-to-following ratio. Each is checked against the typical range for the same cohort.

Important: the report says “sits within the typical range” or “outside the typical range,” never “X% fake followers.” We can’t see private follower lists, so we don’t pretend to. The signals are a calibrated sanity check, not a definitive verdict.
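
As a sketch, the check reduces to a per-signal range test. The `(low, high)` typical bands per cohort are assumed to be precomputed, and every name here is illustrative.

```python
SIGNALS = ("like_comment_ratio", "comment_depth",
           "engagement_variance", "follower_following_ratio")

def authenticity_flags(profile: dict[str, float],
                       typical: dict[str, tuple[float, float]]) -> dict[str, str]:
    """Flag each public signal as inside or outside the cohort's typical range.

    The output is a set of range flags, never a fake-follower percentage.
    """
    flags = {}
    for name in SIGNALS:
        low, high = typical[name]  # assumed precomputed per-cohort band
        in_range = low <= profile[name] <= high
        flags[name] = ("within typical range" if in_range
                       else "outside typical range")
    return flags
```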

07 · What we don’t do

Things this report deliberately doesn’t claim

  • We don’t score private accounts.
  • We don’t scrape DMs, Stories, or any non-public surface.
  • We don’t identify individual fake followers by handle.
  • We don’t score brand-fit, content quality, or aesthetic.
  • We don’t produce a single “market value” — the sponsorship range is exactly that, a range.
  • We don’t use the score to rank creators against each other in a leaderboard.

08 · Updates

When the methodology changes, you’ll see it

Every change to a weight, formula, or threshold is recorded in the changelog with a date and a one-line reason. Historical scores are not retroactively adjusted unless the change is a bug fix; a methodology change creates a new score baseline going forward.