
TikTok Niche Benchmarking: How Anna Reads 50 Competitor Accounts in 10 Minutes

Manually reviewing 50 peer creators in your niche takes hours and is wrong by next week. Anna pulls the public data, segments by follower tier and posting cadence, and gives you a benchmark report that refreshes with one click.

By Anna · ~7 min read · Updated May 16, 2026

The way most creators answer "how am I doing in my niche" is the same way they did it in 2021. Open TikTok. Tap into the search bar. Type the niche. Scroll. Click into the first account that looks comparable. Note the follower count. Skim the recent posts. Eyeball the like counts. Back, scroll, repeat.

After an hour you have a feeling. Maybe a spreadsheet with seven accounts in it. You stopped writing down post counts because there were too many. You skipped the engagement-rate maths because the numbers were dragging you out of the flow. By the time you closed the browser tab, the data was already two days stale.

A week later you do it again because you can't shake the suspicion that the niche moved without you.

The hidden question

The question creators actually want answered is not "what are my competitors posting." It is:

Among the fifty most-comparable accounts in my niche, what posting cadence and content themes are outperforming the median right now, controlling for follower tier?

That is a benchmarking question. It has a real answer. The reason no creator asks it cleanly is the data work — pulling fifty accounts, computing per-post engagement rate, classifying each post by theme, tiering by follower count, weighting by recency — is too expensive to do by hand. So creators replace the analysis with vibes. Vibes are wrong reliably enough to be expensive.

What Anna does

Anna runs the benchmark in four steps. The work that takes a creator a long afternoon and produces nothing reusable takes Anna ten minutes and produces a report URL that refreshes on demand.

Step 1. Define the cohort once. You give Anna a list of fifty handles, or a niche keyword like "vertical-farming creators" or "midwest realtors", and let her enrich the seed list using the social-data integration. Either way the output is a roster: fifty TikTok handles tagged as peers. The roster persists in your dataset. It is the cohort.

Step 2. Pull the public data. Through the social-data integration Anna fetches each peer's recent 90 days of posts — caption, post date, plays, likes, comments, shares, saves, follower count at time of post. This is all public data; no scraping is involved. The fetch is a single tool call. Anna writes the result to a table you can see and re-run later.

Step 3. Compute the comparable metrics. Engagement rate has half a dozen definitions and creators argue about which is right. Anna picks one (likes + comments + shares + saves divided by plays), computes it per post, then aggregates per account. She also computes the posting cadence — actual posts per week, not what the bio claims — and segments every account into a follower tier so the comparison is fair.
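The per-post metric and the per-account roll-up can be sketched in a few lines. This is a minimal illustration of the ER definition stated above, not Anna's actual implementation; the field names and the 90-day window are assumptions.

```python
from statistics import median

def engagement_rate(post):
    # ER definition used in this article:
    # (likes + comments + shares + saves) / plays
    interactions = post["likes"] + post["comments"] + post["shares"] + post["saves"]
    return interactions / post["plays"] if post["plays"] else 0.0

def account_summary(posts, window_days=90):
    # Per-account roll-up: median per-post ER, plus the actual
    # posting cadence over the window (not what the bio claims).
    ers = [engagement_rate(p) for p in posts]
    return {
        "median_er": median(ers),
        "posts_per_week": len(posts) / (window_days / 7),
    }
```

Aggregating with the median rather than the mean keeps one viral outlier from distorting an account's baseline.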

Step 4. Classify content themes. This is the step that breaks every manual attempt. Anna adds an =AI() formula on the caption: =AI("Classify this TikTok caption into one of: before/after transformation, stitch reaction, day-in-life vlog, talking-head explainer, product walkthrough, trend mimicry, other", caption). The formula persists in the data. New posts get classified automatically when the dataset refreshes. The taxonomy is yours to adjust.
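Outside the spreadsheet, the same step looks like mapping each caption to one label from the taxonomy. As a rough stand-in for the `=AI()` formula, here is a keyword heuristic — the theme labels are the ones from the formula above, but the keywords are invented for illustration and a real classifier would use the AI call, not substring matching.

```python
# Crude keyword stand-in for the =AI() caption classifier.
# Labels match the taxonomy in the formula; keywords are illustrative only.
THEME_KEYWORDS = {
    "before/after transformation": ["before", "after", "transformation"],
    "stitch reaction": ["stitch", "reacting to"],
    "day-in-life vlog": ["day in my life", "vlog", "morning routine"],
    "talking-head explainer": ["explained", "here's how", "tutorial"],
    "product walkthrough": ["unboxing", "review", "walkthrough"],
    "trend mimicry": ["trend", "challenge"],
}

def classify_caption(caption):
    text = caption.lower()
    for theme, keywords in THEME_KEYWORDS.items():
        if any(k in text for k in keywords):
            return theme
    return "other"
```

The important property is the one the article calls out: the classification lives with the data, so new posts get labelled on refresh rather than re-reviewed by hand.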

The output is a report.

What the report looks like

Anna delivers one URL with three views. They answer three sub-questions of the main benchmarking question.

Peer accounts analysed: 50 (last refresh: 2 days ago)
Your tier (50K–250K): −0.4pp vs cohort median ER
Outperforming theme: Before/after, +6.1pp above baseline

The first view tiers the peer set by follower count and shows where you sit. The point of the tiering is that engagement rate scales inversely with follower count — comparing a 25K creator to a 1.5M creator on raw ER is a category error. Anna does it correctly.

| Follower tier | Peers in cohort | Median ER | Top 10% ER | Posts / week | Your gap |
| --- | --- | --- | --- | --- | --- |
| 10K – 50K followers | 14 | 8.4% | 14.2% | 4.6 | +1.2pp |
| 50K – 250K followers | 19 | 5.7% | 9.8% | 5.1 | −0.4pp |
| 250K – 1M followers | 11 | 3.9% | 7.1% | 6.2 | N/A |
| 1M+ followers | 6 | 2.6% | 5.4% | 5.8 | N/A |
Illustrative engagement rate by follower tier across the peer cohort. The 'Your gap' column compares your account to the median in your own tier — the only fair comparison.
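The tiering itself is plain bucketing. A sketch using the tier edges from the table above — the assumption that accounts below the 10K floor fall outside the cohort is mine:

```python
# Follower tiers matching the benchmark table; edges are half-open [lo, hi).
TIERS = [
    (10_000, 50_000, "10K – 50K"),
    (50_000, 250_000, "50K – 250K"),
    (250_000, 1_000_000, "250K – 1M"),
    (1_000_000, float("inf"), "1M+"),
]

def follower_tier(follower_count):
    for lo, hi, label in TIERS:
        if lo <= follower_count < hi:
            return label
    return None  # below the 10K floor: outside the cohort (assumption)
```

Every downstream comparison — medians, baselines, your gap — is computed within a tier, which is what makes it fair.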

The second view ranks content themes against the cohort baseline. The baseline is the median ER across all themes in your tier. Themes that exceed the baseline by more than a percentage point are flagged as opportunities; themes that underperform by more are flagged as drags.

| Content theme | Peer posts (90d) | Median ER | vs niche baseline | Verdict |
| --- | --- | --- | --- | --- |
| Before/after transformations | 142 | 11.8% | +6.1pp | Outperform |
| Stitch reactions to trends | 218 | 7.4% | +1.7pp | On par |
| Day-in-the-life vlogs | 96 | 5.2% | −0.5pp | On par |
| Talking-head explainers | 304 | 4.1% | −1.6pp | Underperform |
| Product walkthroughs | 167 | 3.6% | −2.1pp | Underperform |
| Pure dance / trend mimicry | 79 | 2.8% | −2.9pp | Underperform |
Illustrative content theme performance against the niche baseline. Before/after transformations are the cohort winner. Talking-head explainers and product walkthroughs are reliable underperformers.
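The verdict column follows the one-percentage-point rule stated above. A sketch, with the function name and threshold parameter as my own:

```python
def theme_verdict(theme_er, baseline_er, threshold_pp=1.0):
    # Compare a theme's median ER to the tier baseline in percentage points.
    # More than `threshold_pp` above: opportunity; more than it below: drag.
    gap_pp = (theme_er - baseline_er) * 100
    if gap_pp > threshold_pp:
        return "Outperform"
    if gap_pp < -threshold_pp:
        return "Underperform"
    return "On par"
```

Making the threshold a parameter matters in practice: a half-point band is noise in a small cohort, so the cut-off should widen as the peer set shrinks.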

The third view is the one creators stare at the longest: cadence vs engagement rate, computed across the peer set.

[Chart: posts per week (peer cohort) vs median engagement rate (%) — peak at 7 posts per week]
Illustrative relationship between posting cadence and engagement rate across the peer cohort. Median ER peaks at roughly seven posts per week, then declines as cadence rises further.

The shape matters. Cadence helps until it doesn't. Most niches have an inverted-U: too few posts and the algorithm forgets you, too many and your own audience burns out. The peak position is niche-specific. In the example above it sits around seven posts per week. In another niche it could be three. The benchmark tells you what your cohort's curve looks like — not what some podcast guest claimed last year.
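Finding your cohort's peak is a bucket-and-median exercise over the per-account summaries. A minimal sketch — the input shape (per-account cadence and median ER) is an assumption consistent with the metrics described earlier:

```python
from collections import defaultdict
from statistics import median

def cadence_curve(accounts):
    # accounts: dicts with "posts_per_week" and "median_er".
    # Bucket by rounded weekly cadence, take the median ER per bucket,
    # and return the curve plus the cadence where it peaks.
    buckets = defaultdict(list)
    for a in accounts:
        buckets[round(a["posts_per_week"])].append(a["median_er"])
    curve = {k: median(v) for k, v in sorted(buckets.items())}
    peak = max(curve, key=curve.get)
    return curve, peak
```

On an inverted-U cohort the peak falls where the article's example chart puts it; on your niche it falls wherever your data says.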

What an operator does with this

The report is a planning document, not a vanity print-out. Three actions follow from a good benchmark.

The first action is the schedule. If the cohort peak is at seven posts a week and you're posting three, you are giving away signal. If you're at twelve and the peak is at seven, you're burning yourself for nothing. Adjust toward the peak.

The second action is the theme mix. The bottom of the theme table — talking-head explainers, product walkthroughs — is the work you instinctively want to do because it converts. The top of the table — before/after, stitch reactions — is the work that grows the audience that you then convert. Most creators get this backwards and wonder why their conversion content has no audience to convert. The benchmark tells you the ratio.

The third action is the refresh. The cohort isn't static. New creators enter the niche; old ones quit. Trends rotate. The peer set decays in usefulness within a quarter. Anna refreshes the underlying data on schedule or on demand. The report URL doesn't change — you paste it in your group chat or your monthly review doc, and the charts inside it are always current.

What you stop doing is the Tuesday-morning scroll. The hour you used to spend opening fifty TikTok profiles in tabs and trying to hold the numbers in your head is gone. The data is in a dataset. The dataset feeds the report. The report tells you what to post next. You go and post it.

Frequently asked questions

How does Anna get TikTok data without scraping?

Anna uses a licensed social-data integration that exposes public TikTok metrics — captions, plays, likes, comments, shares, saves, follower counts — through an API. Only public data is included. Private accounts, hidden like counts, and view counts the creator has restricted are not available, and Anna will tell you so explicitly in the report when a peer's data is partial.

How many peer accounts do I need for the benchmark to be meaningful?

The minimum is roughly twenty peers split across two follower tiers. Below that, the median and the 90th percentile both wobble too much to act on. Fifty is the sweet spot — enough statistical confidence to call themes winning or losing, small enough that you can sanity-check the cohort yourself. Anna will flag a cohort that's too small and recommend adding accounts.

Does the analysis work for Instagram Reels or YouTube Shorts as well?

The same methodology applies. Engagement-rate definitions vary by platform — Instagram does not expose play counts to the API in the same way TikTok does, so the denominator is different — but Anna handles those platform-specific quirks internally. If you want a multi-platform benchmark, ask for it in the same conversation; Anna will run the cohort on each platform with the appropriate metric.

How fresh is the benchmark, and how do I update it?

By default the underlying dataset is the most recent 90 days of peer activity. Refreshing the social-data integration pulls the latest posts and Anna recomputes the medians and the theme classifications automatically. The report URL is stable, so anyone you shared it with sees the updated numbers without you having to re-send anything.

Can I see which specific peer post drove a theme's outperformance?

Yes. Click into any theme row in the report and Anna lists the top-performing peer posts under that theme — caption, ER, post date, and a link to the original. This is the part of the analysis that's hardest to do manually and most useful for planning: the qualitative example sits next to the quantitative pattern.

See Anna's work

Anna ran this analysis on a real dataset — open the live report.

Open a live peer creator benchmark Anna ran on real engagement data. Posting cadence, topic clustering, what formats are working — without the manual review.

Open the live report →