
Brand Sentiment Without an Agency: Reading Every Mention Across Platforms

Social listening tools cost a thousand dollars a month and still need humans to read the feed. Anna pulls every public mention across platforms, classifies each one, and rolls the signal up by week.

By Anna · ~8 min read · Updated May 16, 2026

The brand health slide is the slide nobody knows how to fill in.

Sales numbers have a source. Web traffic has a source. Ad spend has a source. Brand sentiment has a vibe, a screenshot of one particularly bad comment, and the marketing director's word that things are "trending in the right direction." When the board asks for evidence, the room goes quiet and somebody promises to share a deck after the meeting.

There is a category of tool that exists to fix this — the social listening platforms. They cost a thousand dollars a month or more, they ingest mentions across platforms, and they still hand the brand team a feed of raw comments with no clear answer to the question that started the conversation. The team pays for ingestion. The team still does the reading.

This is the gap Anna closes.

What an internal sentiment system actually has to do

Three jobs, none of which the existing tools do well.

It has to ingest every public mention across the platforms the brand cares about — TikTok, Instagram, YouTube, X, Facebook, Threads — and join them into one comparable dataset. A mention is a mention whether it is a TikTok stitch or a Facebook comment. The platform metadata stays, but the unit of analysis is the mention.
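Joining platforms into one comparable dataset means mapping each platform's payload onto a shared mention schema. A minimal sketch in Python, with field names that are illustrative rather than Anna's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# One comparable unit of analysis, whatever the source platform.
# Field names here are illustrative, not Anna's real schema.
@dataclass
class Mention:
    platform: str      # "tiktok", "instagram", "youtube", "x", "facebook", "threads"
    author: str
    text: str
    timestamp: datetime
    reach: int         # platform-reported or estimated audience size
    permalink: str

def normalize_tiktok(raw: dict) -> Mention:
    """Map one platform's raw payload onto the shared schema.
    The raw field names below are hypothetical."""
    return Mention(
        platform="tiktok",
        author=raw["author_handle"],
        text=raw["desc"],
        timestamp=datetime.fromtimestamp(raw["create_time"]),
        reach=raw.get("play_count", 0),
        permalink=raw["share_url"],
    )
```

One such adapter per platform, and everything downstream works on the same rows.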

It has to read each one. Not surface them in a feed for a human to read. Actually classify the sentiment and the theme. "Product quality" is a different conversation from "shipping," and rolling them together is the reason most brand-health summaries say nothing.

It has to roll the result up by week, by theme, by sentiment, so the brand team can see where the curve is bending and act on it before a one-off complaint becomes a viral thread. Trend, not snapshot.

The combination of those three jobs is what a brand-health report should be. Most teams have none of them in working order.
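For the rollup job, the shape of the computation is a week-by-theme-by-sentiment count over the enriched rows. A pandas sketch, assuming the classified columns already exist:

```python
import pandas as pd

# Tiny enriched sample; real inputs are the classified mention rows.
mentions = pd.DataFrame({
    "timestamp": pd.to_datetime(["2026-03-02", "2026-03-03", "2026-03-10"]),
    "theme":     ["Shipping", "Shipping", "Product quality"],
    "sentiment": ["Negative", "Positive", "Positive"],
})

# Week x theme x sentiment counts -- the trend table behind the charts.
weekly = (
    mentions
    .groupby([pd.Grouper(key="timestamp", freq="W-MON"), "theme", "sentiment"])
    .size()
    .rename("mentions")
    .reset_index()
)
```

From this table, both the volume-by-week chart and the sentiment-by-theme chart are straightforward pivots.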

What Anna does

Anna pulls the mention stream across the platforms the brand connects. For most brands this is some combination of TikTok, Instagram, YouTube, X, Facebook, and Threads. Public mentions only — Anna does not read DMs and does not need to. The catch is volume: a mid-sized brand generates 800-2,000 public mentions in a normal week, and that climbs sharply during a launch or a controversy.

She pulls the trailing 12 weeks by default, which is the window that lets a weekly trend stabilise. The mention text, the platform, the author handle, the timestamp, the reach estimate, and a permalink go into the user's dataset.

Then she enriches each row with two =AI() columns. The first reads the mention text and classifies the sentiment as Positive, Neutral, or Negative. The second classifies the theme — Product quality, Pricing, Shipping, Vibe, Packaging, or Other. The formulas persist in the user's workbook. The brand team can see them, edit the theme list, and re-run the columns when the schema needs to evolve. This is the auditable layer most listening tools refuse to expose.
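To make the constrained-label idea concrete, here is a toy stand-in in Python. The real =AI() columns call a language model; the keyword matching below is only an illustration of classification restricted to a fixed, editable label set, and the keywords themselves are invented:

```python
SENTIMENTS = ("Positive", "Neutral", "Negative")
THEMES = ("Product quality", "Pricing", "Shipping", "Vibe", "Packaging", "Other")

# Toy keyword stand-in for the model call behind =AI().
# The point is the constrained, editable label set, not the matching logic.
def classify_theme(text: str) -> str:
    keywords = {
        "Shipping":        ("dhl", "carrier", "delivery", "shipped", "tracking"),
        "Pricing":         ("price", "£", "$", "expensive", "cheap"),
        "Packaging":       ("box", "packaging", "unboxing", "tissue"),
        "Product quality": ("fabric", "linen", "quality", "feels"),
    }
    lowered = text.lower()
    for theme, words in keywords.items():
        if any(w in lowered for w in words):
            return theme
    return "Other"
```

Editing the theme list here is the same move as editing the theme list in the workbook: the schema changes, the columns re-run.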

Here is what the enriched mention dataset looks like, sampled:

| Platform | Author | Excerpt | Theme | Sentiment | Reach |
| --- | --- | --- | --- | --- | --- |
| TikTok | @meadowsandmoss | ordered the heavy throw three weeks ago and dhl still hasn't moved it past frankfurt 😭 | Shipping | Negative | 18,400 |
| Instagram | @noor.studies | the linen feels so good on a hot day, this is the third one I have bought | Product quality | Positive | 6,200 |
| X | @hugo_lin | honestly the packaging is gorgeous but it's like four boxes for one t-shirt, feels excessive | Packaging | Negative | 2,800 |
| YouTube | @hauls-by-rae | unboxing is part of the experience for me — the tissue paper, the card, all of it | Packaging | Positive | 41,000 |
| Threads | @jordan.makes | love the brand but £180 for a sweater is wild for what it is | Pricing | Negative | 940 |
| Facebook | Priya R. | received my order in 4 days, everything as described, will buy again | Shipping | Positive | 220 |

Each row is one public mention. Anna joins the cross-platform stream, classifies sentiment and theme on every row, and keeps the reach metadata so the rollup can weight by audience size.

A row-level read like this is already more than most brand teams have ever seen. But the row level is not the answer. The answer is what 12 weeks of rows look like rolled up.

The rollup

Anna builds the report against the enriched dataset. Three views, all designed to be read on one URL.

- Mentions (12wk): 12,408
- Net sentiment: +42 (down 18 vs the prior 12 weeks)
- Fastest-rising negative theme: Shipping (+128% mentions)

The headline number is the net sentiment score — positive mentions minus negative mentions, weighted by reach. It is moving in the wrong direction. The interesting part is which theme is dragging it down.
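The article does not spell out the exact weighting, but one plausible construction of a reach-weighted net sentiment score, scaled to a -100..+100 range, looks like this:

```python
def net_sentiment(rows: list[dict]) -> float:
    """Reach-weighted net sentiment on a -100..+100 scale.

    One plausible construction (the exact weighting is a modelling choice):
    (positive reach - negative reach) / total reach, scaled to 100.
    """
    total = sum(r["reach"] for r in rows)
    if total == 0:
        return 0.0
    signed = sum(
        r["reach"] * {"Positive": 1, "Neutral": 0, "Negative": -1}[r["sentiment"]]
        for r in rows
    )
    return round(100 * signed / total, 1)
```

Weighting by reach means one viral negative post moves the score more than ten low-reach positives, which matches how brand risk actually behaves.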

[Chart: mentions per week by theme over W1-W12, y-axis 10-60. Series: Product quality, Shipping, Pricing, Packaging.]
Weekly mention volume by theme. The pattern is not the absolute level — it is the slope. Shipping and packaging are climbing; product quality is in slow decline.

Read the chart by slope, not by height. Product quality is still the largest theme by volume, but it is in a slow decline. Shipping mentions have more than doubled over the 12-week window. Packaging is on a steeper climb than anything else. Pricing is flat — that is a stable, unremarkable conversation.

The decline in product quality conversation is not necessarily bad news; it can mean the brand is no longer interesting, which is its own problem, or it can mean the product is doing its job and people have moved on. Anna will say which one it is by pulling sentiment alongside volume, which is the next chart.
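"Read by slope" has a concrete computation behind it: fit a line to each theme's weekly counts and compare the fitted slopes. A sketch with NumPy:

```python
import numpy as np

def weekly_slope(counts: list[int]) -> float:
    """Least-squares slope of mentions-per-week.
    Positive = the conversation is growing; the magnitude is mentions/week."""
    weeks = np.arange(len(counts))
    slope, _intercept = np.polyfit(weeks, counts, 1)
    return float(slope)
```

Ranking themes by this number, rather than by total volume, is what surfaces Shipping and Packaging ahead of the still-larger Product quality conversation.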

[Chart: total mentions by theme over 12 weeks, 0-500, stacked by sentiment (Negative, Neutral, Positive). Themes: Product quality, Packaging, Shipping, Pricing, Vibe.]
Total mentions by theme over 12 weeks, broken down by sentiment. Shipping is the only theme with a majority-negative profile. That is where the report stops being descriptive and starts being a brief.

Now the picture sharpens. Shipping is not just a rising volume — it is a rising volume of negative mentions. 384 negative shipping mentions across the window, against 84 positive ones. That is a 4.5-to-1 negative ratio for a theme that 12 weeks ago barely registered. The board slide writes itself: shipping is the brand-health risk, and it is concentrated in one delivery partner the operations team can name.

What a brand team does with this on Monday

The first move is the easy one. The shipping data goes to operations. Anna can pull the negative shipping mentions into a sub-report and show which carrier, which region, and which order-size band the complaints cluster around. That report becomes a ticket for the ops lead, not a vague concern raised in a meeting.
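The sub-report is a filter plus a groupby. Note that carrier and region are not part of a public mention, so the columns below stand in for a hypothetical join against order records:

```python
import pandas as pd

# "carrier" and "region" are hypothetical columns produced by joining
# complaints to order records; they do not come from the public mention.
mentions = pd.DataFrame({
    "theme":     ["Shipping", "Shipping", "Shipping", "Pricing"],
    "sentiment": ["Negative", "Negative", "Positive", "Negative"],
    "carrier":   ["DHL", "DHL", "Royal Mail", None],
    "region":    ["DE", "DE", "UK", None],
})

# Filter to the negative shipping conversation, then cluster it.
shipping_neg = mentions[(mentions.theme == "Shipping") & (mentions.sentiment == "Negative")]
clusters = shipping_neg.groupby(["carrier", "region"]).size().rename("complaints")
```

The resulting cluster table is the ops ticket: a named carrier, a named region, a count.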

The second move is the harder one. The brand team needs a position on shipping in their public communication. If the trend continues, the conversation will reach a tipping point where it becomes a story rather than a stream of individual complaints. Anna's report tells the brand team they have roughly four weeks before that happens — based on the slope of the negative shipping curve and the typical volume threshold where individual mentions start citing each other. The communications director can prepare a response before it is needed, not after.

The third move is the one that matters longest. The packaging climb is the more interesting curve. Packaging mentions are rising, and the sentiment is mixed — 188 positive, 218 negative. The conversation is genuinely contested. Anna will surface the actual mention text driving each side: positive mentions tend to be unboxing content from creators with large audiences; negative mentions tend to be sustainability and excess-material critiques from smaller accounts. The same packaging is being celebrated by one segment and criticised by another. That is a brand decision, not an operations decision. The report frames it.

The most defensible thing a brand-health report can do is name the conversation that is changing fastest. Volume answers "what are people talking about." Slope answers "what will people be talking about in six weeks if nothing changes." The slope chart is the one the team should look at first.

What the deliverable looks like

A URL. The brand lead drops it in the marketing channel before the Tuesday standup. The CMO opens it in the browser before her 9am with the founder. The customer experience lead pulls the shipping section into the operations channel. Nobody rebuilds a slide.

The report is opinionated and designed to be sent — metrics at the top, theme-trend chart in the middle, sentiment-by-theme chart beneath, Anna's commentary woven in between. The mention-level table sits below the rollups for anyone who wants to read the actual posts, with each row linking out to the source.

It is the kind of artefact the brand team can stand behind in front of the board. Not because Anna's analysis is unimpeachable — it is statistical, and the brand team is welcome to dispute the theme labels — but because the analysis is on the page, the sample is the full public mention stream, and the classification logic is editable.

Why this is not a $1,200-a-month listening tool

The legacy listening tools solved the ingestion problem and called it done. They are good at pulling mentions. They are bad at telling you what the mentions mean. A brand team using one of them still has to read the feed and write the summary themselves.

Anna's posture is the opposite. The ingestion is table stakes. The work is the classification, the rollup, and the framing. The deliverable is a report, not a feed.

This shifts the spend mix. A brand team can keep its existing listening tool if it wants the raw stream, and let Anna handle the analysis on top. Most teams find that once Anna's report runs weekly, the listening dashboard goes unopened.

Frequently asked questions

How is this different from a Brandwatch or Sprout dashboard?

Brandwatch, Sprout, and the other listening tools ingest mentions and present them in a feed with high-level sentiment counts. The classification is shallow and not editable. Anna pulls the same mention stream (or imports it from a tool the brand already pays for) and runs deeper, brand-specific classification — sentiment plus theme, with the theme list editable per brand. The output is a report rather than a feed.

What about private comments and DMs?

Anna only reads public mentions. DMs and private comments are out of scope by design — the system never asks for inbox access. If the brand team wants to layer customer-support tickets into the same view, that is a separate dataset Anna can join on, but it sits behind the team's existing helpdesk auth.

How accurate is the sentiment classification?

As an illustrative benchmark, sentiment classification of public English-language social comments typically lands in the high 80s to low 90s for agreement with a human reviewer on a held-out sample; theme classification usually runs lower because the boundary between, say, "Vibe" and "Product quality" is genuinely fuzzy. Your numbers will vary with the brand, the language mix, and the theme list. Anna flags low-confidence rows so the brand team can audit them, and the theme prompt is editable — that's the lever that usually closes the accuracy gap.

Can Anna run this for multiple brands or sub-brands?

Yes. The mention stream is just data; the classification runs the same way. For agencies running this across a client roster, each brand sits in its own dataset with its own theme schema. The report URL is per-brand. Multi-brand rollups are possible but usually less useful than per-brand depth — the conversations diverge enough that aggregating them flattens the signal.

See Anna's work

Anna ran this analysis on a real dataset. Open the live cross-platform engagement report she wrote on a real brand handle, where the same content lands differently on each platform, surfaced automatically.

Open the live report →