AI Analysis for Google Analytics: A Way Around the Hardest Tool in Marketing
Google Analytics has more of your traffic data than any other tool you own. It is also the hardest tool in your stack to get an answer out of.
You want to know which traffic source produced the customers worth keeping. The default report shows sessions. You change it to users. You add source / medium. You want a second dimension — campaign — and the chart breaks because GA4's standard reports cap at one secondary dimension. You open Explorations, the "real" interface. You build a free-form report. The dimension you need is paywalled. You give up and export six months of data into a spreadsheet that does not load.
GA4 has the answer. The interface was not built to give it to you.
Why GA is hard to use for analysis
There is a reason GA fluency reads like a job posting. The interface punishes you for asking the question you actually have:
- Standard reports show what happened, not why. They are dashboards, not analyses. You can drill, but only along the paths Google decided to expose.
- Explorations are powerful and almost unusable. Every meaningful question requires a free-form report, a funnel, or a path exploration — each of which has its own UI, its own dimension limits, and its own way of breaking when you add the third filter.
- The data model leaks into the interface. You should not have to know whether a value lives in event_params.value or user_properties.lifetime_value. GA4 makes you know.
- Cross-source analysis is not in the product. GA cannot see your Stripe revenue, your Shopify orders, or your Google Ads spend. The questions that span the stack — "which channel produced the highest-LTV customers, not just the most sessions?" — require an export.
- Sampling, thresholds, and 14-month retention. Once you ask a deep enough question over a long enough window, GA quietly returns a sampled answer and warns you in fine print.
The realistic path is BigQuery export plus SQL. The realistic person to write that SQL is an analyst. Most marketers do not have one to hand work off to. So the question stops getting asked.
Connect Google Analytics
Anna connects to your GA4 property via Google OAuth, read-only, ninety seconds. Pick the property, confirm the scope, done.
Once she has access, she can read Reports, Properties, Metrics, Dimensions, Key Events, Audiences, Custom Dimensions, Custom Metrics, and Realtime Data. She runs the same APIs Explorations runs against — runReport, runPivotReport, runFunnelReport, runRealtimeReport, batchRunReports — without you needing to choose between them. You ask the question. She decides which API and which dimensions to use, and tells you which she chose.
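Under the hood these are ordinary Data API calls. A minimal sketch of what a runReport request body looks like, so you can see what you are no longer writing by hand (the dimension and metric names below are illustrative choices, not necessarily the ones Anna picks):

```python
# Minimal sketch of a GA4 Data API runReport request body (REST JSON shape,
# sent to POST /v1beta/properties/{PROPERTY_ID}:runReport).
# The dimension and metric names here are illustrative, not prescriptive.
def run_report_body(dimensions, metrics, start="30daysAgo", end="today"):
    """Build the JSON body for a runReport call."""
    return {
        "dateRanges": [{"startDate": start, "endDate": end}],
        "dimensions": [{"name": d} for d in dimensions],
        "metrics": [{"name": m} for m in metrics],
    }

body = run_report_body(
    dimensions=["sessionSourceMedium", "landingPage"],
    metrics=["sessions", "engagementRate", "keyEvents"],
)
```

Explorations makes you assemble this shape by clicking. The point of the integration is that the question, not the request body, is the interface.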
This is the part that changes the workflow. The question goes in. The answer comes back. There is no intermediate report you have to build first.
The traffic-explanation prompt
The single hardest question to answer in GA — and the most useful — is what changed. GA's anomaly detection is rudimentary; it flags spikes but does not explain them.
Try the prompt that comes with the integration:
"Analyze Google Analytics and explain the biggest traffic, engagement, and conversion changes over the last 30 days."
Anna pulls the period-over-period deltas across sessions, engagement rate, conversion rate, and revenue. She decomposes each delta by source / medium, by landing page, and by device. She runs a quick test on whether each change is statistically meaningful or within your normal noise band. She returns a paragraph, not a chart: "Engagement rate fell 4.2 percentage points. The drop is concentrated in mobile users from paid social, specifically the campaigns that started running on April 14. Desktop and organic engagement are flat."
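The "statistically meaningful" check does not have to be exotic. A sketch of the simplest version, a two-proportion z-test on engaged versus total sessions, with invented counts chosen to match the 4.2-point example:

```python
# Two-proportion z-test on engagement rate, prior period vs. recent period.
# The session counts below are invented for illustration.
import math

def engagement_z(engaged_a, total_a, engaged_b, total_b):
    """z-score for the difference between two engagement rates (a minus b)."""
    p_a, p_b = engaged_a / total_a, engaged_b / total_b
    pooled = (engaged_a + engaged_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Prior 30 days: 61.0% engaged. Last 30 days: ~56.8% engaged (a 4.2 pp drop).
z = engagement_z(34160, 56000, 30860, 54330)
significant = abs(z) > 1.96  # outside the ~95% noise band
```

At these volumes the drop is far outside noise; at a few hundred sessions per period the same 4.2 points might not be.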
That is the answer. There is no Exploration you needed to build.
The funnel-leak prompt
The second-hardest question in GA is "where is demand leaking before conversion?" GA4's funnel exploration is fine if you already know your four steps. It is not fine for finding the segment that drops off where you did not expect.
Use the integration's funnel prompt:
"Use Google Analytics to show which landing pages, channels, and paths are causing the biggest drop-offs before conversion."
Anna pulls the event sequence per user, computes drop-off rates per segment, and ranks segments by recoverable demand. The output looks like this: "Mobile users from Meta drop 38 percentage points more than desktop users from organic at the cart step. At your average order value of $84, that is roughly $11K of recoverable monthly revenue."
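The recoverable-demand arithmetic behind a finding like that is simple enough to sanity-check yourself. A sketch with illustrative inputs (the conversion rates and the ~350 monthly cart-stage users are assumptions, not data from the example):

```python
# Sketch of the recoverable-demand estimate behind a funnel finding.
# The conversion rates, monthly volume, and AOV here are assumed for illustration.
AOV = 84.0  # average order value from the example above

def recoverable_monthly_revenue(segment_rate, baseline_rate, monthly_users, aov=AOV):
    """Revenue recovered if the lagging segment converted at the baseline rate."""
    gap = baseline_rate - segment_rate  # gap as a proportion (0.38 = 38 pp)
    return gap * monthly_users * aov

# Mobile/Meta converts 38 pp worse than desktop/organic at the cart step,
# across roughly 350 cart-stage users a month.
estimate = recoverable_monthly_revenue(0.22, 0.60, 350)  # roughly $11K/month
```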
After the funnel answer, ask "what changed for that segment between the prior 60 days and the most recent 60 days?" Anna pulls the event-level diff and surfaces the specific landing page, the checkout step that started taking longer, or the campaign that drove worse traffic. That is the move from "the drop is here" to "here is why."
The plain-English channel-quality prompt
Sessions are not the answer. Conversions are not the answer. The answer is which channels brought in customers worth keeping. GA cannot answer this on its own — it does not know about your post-acquisition revenue.
Pair GA with Shopify or Stripe and ask:
"Analyze Google Analytics and Shopify together to show which channels bring the highest-value customers, not just the most sessions."
Anna pulls the first-touch source for every converting user_pseudo_id, joins to customer records in your revenue source, computes 90-day revenue per acquired customer, and ranks the channels by quality rather than quantity.
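Conceptually, that join looks like this. A toy sketch with hypothetical field names and four invented customers, just to show the shape of the computation:

```python
# Toy sketch of the channel-quality join: GA4 first touch x 90-day revenue.
# Field names and the four customers are hypothetical.
from collections import defaultdict

first_touch = {  # user_pseudo_id -> first-touch channel (from GA4)
    "u1": "email", "u2": "paid_social", "u3": "email", "u4": "paid_social",
}
revenue_90d = {  # user_pseudo_id -> 90-day revenue (from Stripe or Shopify)
    "u1": 240.0, "u2": 60.0, "u3": 180.0, "u4": 45.0,
}

def revenue_per_acquired_customer(first_touch, revenue_90d):
    """Average 90-day revenue per customer, keyed by acquisition channel."""
    totals, counts = defaultdict(float), defaultdict(int)
    for uid, channel in first_touch.items():
        totals[channel] += revenue_90d.get(uid, 0.0)
        counts[channel] += 1
    return {ch: totals[ch] / counts[ch] for ch in totals}

quality = revenue_per_acquired_customer(first_touch, revenue_90d)
# email: 210.0 per acquired customer; paid_social: 52.5
```

Neither number is visible inside GA, because the revenue side never reaches it.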
A typical finding:
Paid social drove the most sessions. Email drove the most valuable customers per acquisition. The Exploration you would have built shows neither — it shows session count and conversion rate, and both point in the wrong direction here.
The cross-platform demand prompt
Pair GA with Google Ads or Search Console and the question shifts from "what is happening on my site" to "where is the gap between intent and outcome".
"Use Search Console and Google Analytics to show which queries drive qualified traffic and which pages are wasting organic demand."
Anna pulls Search Console queries, joins to GA4 landing-page behavior, and ranks pages by demand wasted — high impressions, decent click-through, terrible engagement after the click. That is the SEO work list, ranked by recoverable revenue rather than traffic volume.
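One plausible way to score "demand wasted" is clicks earned in search that fail to engage on-site. A sketch with invented page stats (the scoring choice itself is an assumption, not the only option):

```python
# One plausible "wasted demand" score: search clicks that do not engage on-site.
# The page stats below are invented for illustration.
pages = [
    {"page": "/pricing", "impressions": 40000, "ctr": 0.05, "engagement_rate": 0.25},
    {"page": "/blog/guide", "impressions": 90000, "ctr": 0.03, "engagement_rate": 0.70},
]

def wasted_clicks(p):
    """Monthly clicks earned in search that bounce without engaging."""
    return p["impressions"] * p["ctr"] * (1 - p["engagement_rate"])

worklist = sorted(pages, key=wasted_clicks, reverse=True)
# /pricing tops the list: ~1,500 wasted clicks/month vs ~810 for /blog/guide
```

Note that the bigger page by raw impressions is not the bigger opportunity; the ranking flips once post-click engagement enters the score.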
"Compare Google Ads with Google Analytics to show which campaigns bring high-intent traffic and where performance falls apart after the click."
Same shape, paid side. The campaigns that look efficient at the platform layer often fall apart at the on-site funnel. Anna sees both layers.
The weekly narrative
Replacing the Sunday-night reporting cycle is the easiest win. Try:
"Write a weekly Google Analytics summary with the trends that matter, why they changed, and the next actions to take."
Anna writes the narrative in plain English. Methodology visible. No screenshots. No "I think this might be because of…" — she runs the diff and tells you what shifted.
What you stop doing
Connect GA4 once, optionally pair it with your revenue source and ad platforms, and the next-Tuesday cycle goes away. You stop:
- Building three near-identical Explorations because each one only holds two dimensions
- Exporting last quarter's traffic data to Sheets to stitch it to a Stripe export
- Asking your engineer to set up BigQuery so you can write the SQL you never had time to learn
- Telling stakeholders "I think paid social is working" because the dashboard cannot answer the LTV question
You start asking the property the question and getting the answer.
One prompt to start
If you are going to ask only one question of GA this week, ask this:
"Compare the channels GA4 says drove the most sessions last month to the channels that produced the customers with the highest 90-day revenue. Where do they disagree?"
The gap between those two lists is where your next quarter of marketing budget should move.
Connect Google Analytics. Paste the question. Try it at heyanna.studio.