Intermediate · 25 minutes

How to Build a BOFU Content Engine That Drives B2B SaaS Pipeline Through AI Search

A data-driven execution guide for B2B SaaS teams: use Superlines to identify high-intent BOFU queries, research what competitors cite, produce comparison and alternatives content that AI engines extract, and measure the impact on pipeline.


What this guide is for

Bottom-of-funnel content is the highest-ROI content type in AI search. When a buyer asks ChatGPT “What is the best CRM for startups?” or Perplexity “HubSpot vs Salesforce for mid-market,” the answer they get is a direct recommendation — not a list of links. The brand whose content gets cited in that answer wins the deal before the buyer ever visits a website.

This guide is the execution playbook. It shows you how to use Superlines data to identify exactly which BOFU queries to target, research what competitor content is winning citations, produce the articles, optimize them for AI extraction, and measure whether they are driving pipeline.

For general GEO automation workflows — weekly reports, competitive analysis, page optimization — see the GEO Automation Cookbook.

For auditing and optimizing individual pages — schema markup, content structure templates, and measurement — see How to Audit, Optimize, and Measure Content for AI Search.

This guide is specifically about the BOFU content production loop: find the query, research the competition, write the article, verify it works, repeat.


How to run this guide

Every step in this guide works two ways — through the Superlines dashboard and through the Superlines MCP server. The prompts throughout this guide are written for MCP users, but each step also includes a dashboard callout showing exactly where to click.

| Path | Best for | Setup required |
|---|---|---|
| Superlines dashboard | Teams that prefer a visual interface for analysis and reporting | None — log in at analytics.superlines.io |
| Superlines MCP server | Claude Desktop, Cursor, or any MCP client — run analysis from a chat prompt | One-line SSE URL (see below) |

Connect the Superlines MCP server (one step)

To use the MCP prompts in this guide, add the following URL to your MCP client’s configuration:

https://mcpsse.superlines.io?token=YOUR_SUPERLINES_API_KEY

Get your API key from Superlines Organization Settings → API Keys. It starts with sl_live_. Paste it into Claude Desktop (Settings → MCP Servers), Cursor (.cursor/mcp.json), or any other MCP-compatible tool. The Superlines MCP server exposes 26+ AI search analytics tools that the prompts in this guide call automatically.
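
For Cursor, the entry in .cursor/mcp.json typically follows the shape sketched below. Field names vary slightly between MCP clients, so treat this as a template rather than exact syntax; the token value is a placeholder.

```json
{
  "mcpServers": {
    "superlines": {
      "url": "https://mcpsse.superlines.io?token=sl_live_YOUR_KEY_HERE"
    }
  }
}
```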

If you need a step-by-step click-through for setup, see the Setup Guide for Non-Technical Users.


Why BOFU content wins in AI search

Before diving into execution, it helps to understand why BOFU content is disproportionately valuable in AI search specifically.

BOFU queries have the highest commercial intent. When someone asks “best project management tools for remote teams” or “Monday.com vs Asana pricing,” they are not researching a concept — they are making a purchase decision. AI models know this. They respond with specific recommendations, comparisons, and citations to authoritative sources.

BOFU content matches what AI models extract well. Comparison tables, “best for” summaries, FAQ answers, and pricing breakdowns are structured formats that AI models can lift directly into their responses. Unstructured blog posts and thought leadership pieces are harder for models to extract actionable information from.

BOFU queries are where competitors can be displaced. Top-of-funnel queries (“What is project management?”) tend to have entrenched winners with massive domain authority. BOFU queries are narrower, more specific, and less saturated. A well-structured comparison article targeting “Asana vs Monday.com for marketing agencies” can reach citation status faster than trying to rank for the broad category term.

The math: If 100 people per month ask AI assistants to compare tools in your category, and your content gets cited in 30% of those responses, that is 30 high-intent impressions from buyers who are one step from a demo. No ad spend. No SEO waiting game. Just content that answers the right question in the right format.
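
That arithmetic is simple enough to sketch. The function below is illustrative only; the 100-query volume and 30% citation rate are the hypothetical numbers from the paragraph above, not benchmarks.

```python
# Illustrative only: expected AI answers per month that cite your content.
def bofu_impressions(monthly_queries: int, citation_rate: float) -> int:
    return round(monthly_queries * citation_rate)

print(bofu_impressions(100, 0.30))  # 30, the example from the text
```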


Step 1: Mine Superlines data for BOFU opportunities

The first step is finding which BOFU queries exist in your category and where you currently stand. This replaces guesswork with data.

Where to find this in Superlines: Go to Analytics → Content Opportunities to see a ranked list of topics with high query volume and low brand visibility. For decision-stage filtering, open Analytics → Competitive Benchmarking and filter by User Journey Stage → Decision. To discover new BOFU queries you are not yet tracking, use Prompts → Prompt Radar.

Find decision-stage content gaps

Find BOFU content opportunities for "YourBrand":

1. Use find_content_opportunities — show all topics where we have 
   high query volume but low brand visibility
2. Use get_query_data grouped by userJourneyStage — filter to 
   "decision" and "consideration" stages only
3. Use get_competitive_gap — find the specific prompts where 
   competitors outperform us, with AI analysis enabled

For each opportunity, show:
- The exact query text
- User journey stage
- Current brand visibility score
- Top competitor and their visibility score
- Whether this maps to one of the 4 BOFU types: 
  (1) Best/Top listicle, (2) Alternatives, (3) Comparison, 
  (4) Pricing/Review

Sort by commercial intent: comparison and pricing queries first, 
then best/top listicles, then alternatives.

Identify which BOFU types you are missing

Most B2B SaaS brands have significant gaps in at least 2-3 of the four BOFU content types. Use this prompt to map your coverage:

Analyze my tracked prompts for "YourBrand" and categorize them:

1. Get all query data with intent and journey stage
2. Tag each decision-stage query as one of:
   - TYPE 1 (Best/Top): queries like "best X for Y", "top X tools"
   - TYPE 2 (Alternatives): queries like "X alternatives", 
     "alternatives to X"
   - TYPE 3 (Comparison): queries like "X vs Y", "X compared to Y", 
     "which is better X or Y"
   - TYPE 4 (Pricing/Review): queries like "X pricing", "X review", 
     "how much does X cost"
3. For each type, show:
   - Number of tracked queries
   - Average brand visibility across those queries
   - Best and worst performing query
   - Top competitor for that type

Show the results as a coverage matrix:

| BOFU Type | Queries Tracked | Avg Visibility | Biggest Gap |

This tells you exactly where to focus. If you have zero tracked queries for “Alternatives” content, that is an entire category of buyer intent where you are invisible.

Expand your prompt portfolio for BOFU

If the coverage analysis reveals gaps, add new prompts to track before you start writing:

I need to expand my BOFU prompt coverage for "YourBrand". We are a 
[describe your product/category]. Our main competitors are 
[Competitor A], [Competitor B], [Competitor C].

Suggest 20 new BOFU prompts to track, covering all 4 types:

TYPE 1 (Best/Top) — 5 prompts:
Use templates like "best [category] for [use case/company size/role]"

TYPE 2 (Alternatives) — 5 prompts:
Use templates like "[competitor] alternatives", 
"alternatives to [competitor] for [use case]"

TYPE 3 (Comparison) — 5 prompts:
Use templates like "[our brand] vs [competitor]", 
"[competitor A] vs [competitor B]"

TYPE 4 (Pricing/Review) — 5 prompts:
Use templates like "[brand] pricing", "[brand] review", 
"[category] pricing comparison"

For each prompt, include the suggested label (type:best, 
type:alternatives, type:comparison, type:pricing-review).

Then add them using the MCP server. Here is a concrete example of the add_prompts call with BOFU labels:

Add these prompts for "YourBrand":

- "best project management tools for remote teams"
  labels: ["type:best", "bofu:active", "journey:decision"]
  intent: "commercial"

- "Asana vs Monday.com for marketing agencies"
  labels: ["type:comparison", "bofu:active", "journey:decision"]
  intent: "commercial"

- "Monday.com alternatives for small teams"
  labels: ["type:alternatives", "bofu:active", "journey:decision"]
  intent: "commercial"

- "Monday.com pricing 2026"
  labels: ["type:pricing-review", "bofu:active", "journey:decision"]
  intent: "commercial"

In the dashboard: Go to Prompts → Tracked Prompts → Add Prompt. Enter each query, then open the Labels field and type the label names directly. Use Prompts → Bulk Edit to apply labels to multiple prompts in one pass as your list grows.

After 2-3 days of data collection, you will have visibility scores for every BOFU query in your category. This becomes your prioritization data.


Step 2: Prioritize which articles to write first

Not all BOFU opportunities are equal. With limited content resources, you need to focus on the articles that will generate the most pipeline impact.

Where to find this in Superlines: Open Analytics → Competitive Benchmarking and filter by User Journey Stage → Decision to see every prompt where a competitor outperforms you at the decision stage. The Gap Score column shows how large the visibility gap is. For fan-out feasibility, open any tracked prompt and scroll to the Fan-Out Queries section to see what AI models search for when answering it.

The prioritization framework

Score each potential article on three dimensions:

| Dimension | What to measure | Where to find it |
|---|---|---|
| Buyer intent | How close to purchase is the query? | Query type: pricing/review > comparison > alternatives > best/top |
| Competitive gap | How far behind are you vs the leader? | get_competitive_gap — larger gaps = more room to gain |
| Citation feasibility | Can you realistically win this citation? | Fan-out query analysis — do AI models search for content like yours? |

Run the prioritization analysis

Prioritize BOFU content for "YourBrand". For each BOFU query I am 
tracking, score it on a 1-10 scale for:

1. INTENT SCORE: How commercially valuable is this query?
   - Pricing/review queries = 9-10
   - Head-to-head comparison queries = 7-8
   - Alternatives queries = 6-7
   - Best/top listicle queries = 5-6

2. GAP SCORE: How much visibility can we gain?
   - Use get_competitive_gap to find our visibility vs the leader
   - Larger gap = higher score (more room to improve)
   - If we already lead, score = 1 (maintain, don't prioritize)

3. FEASIBILITY SCORE: Can we win this citation?
   - Use get_fanout_query_insights to check what AI models search for
   - If fan-out queries match our product/expertise = high feasibility
   - If fan-out queries reference data we don't have = lower feasibility

Calculate PRIORITY = (INTENT × 0.4) + (GAP × 0.35) + (FEASIBILITY × 0.25)

Show the top 10 articles to write, ranked by priority score, with:
- Target query
- BOFU type (1-4)
- Priority score breakdown
- Recommended article title
- Top competitor to beat
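
If you prefer to score opportunities yourself (for example, in a spreadsheet export), the weighted formula from the prompt can be sketched as a plain function. The weights are the ones defined above; the example scores are made up.

```python
# Weights from the prompt above; intent/gap/feasibility are 1-10 scores.
def priority(intent: float, gap: float, feasibility: float) -> float:
    return round(intent * 0.4 + gap * 0.35 + feasibility * 0.25, 2)

# A pricing query with a large gap beats a listicle with a small one:
print(priority(9, 8, 7))  # 8.15
print(priority(6, 3, 7))  # 5.2
```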

The first batch

For your first month of BOFU production, pick:

  • 2 comparison articles (highest intent, most extractable by AI)
  • 1 alternatives article (captures competitor’s dissatisfied users)
  • 1 best/top listicle (broadest reach, good for topical authority)

This gives you one article per week, covering three of the four BOFU types. Add pricing/review content in month 2 once you see which comparison articles get traction.

Generate a BOFU-focused strategic action plan

After you have at least 2-3 weeks of data, use the generate_strategic_action_plan tool to get a priority-ranked BOFU roadmap in one call. This replaces the manual scoring above with data-driven recommendations:

Generate a strategic action plan for "YourBrand" with these settings:

- focusArea: "content_gaps"
- maxRecommendations: 10
- filter to decision and consideration journey stages only

For each recommendation, include:
- The specific query to target
- BOFU content type (comparison / alternatives / best-top / pricing-review)
- The competitor URL currently winning citations for that query
- An improvement potential score

Also pass the top 3 competitor URLs currently winning our BOFU queries as
competitorUrls so the plan is tailored to beating those specific pages:
- competitorUrls: ["https://competitor.com/their-comparison-page",
                   "https://other.com/alternatives-article"]

The competitorUrls parameter tells the plan to analyze those specific pages — not just the competitor brand as a whole — and generate recommendations tailored to outperforming them. This makes the roadmap significantly more actionable than a general content gap analysis.


Step 3: Research the competitive landscape for each article

Before writing a single word, you need to understand what is currently winning citations for your target query and why.

Where to find this in Superlines: Go to Analytics → Citations and filter by your target prompt to see which URLs are cited most often. Click into any cited URL to see its citation frequency by AI platform. Open the prompt in Prompts → Tracked Prompts and scroll to the Fan-Out Queries section — this shows the background searches AI models perform when answering that query, which is your article’s required subtopic list. For the full competitive gap picture, use Analytics → Competitive Benchmarking → filter by the specific prompt.

Analyze the current citation winner

I am writing a BOFU article targeting this query: 
"[your target BOFU query]"

1. Use get_top_cited_url_per_prompt for the prompt "[your target BOFU query]"
   — show the top 5 cited URLs ranked by citation frequency, with the 
   percentage of AI responses that cite each URL

2. Use get_competitive_gap for the same prompt with aiAnalysis: true — 
   what does the gap analysis say about why the top competitor is winning?
   Include competitor brand name, their visibility score vs ours, and the 
   AI-generated explanation of what makes their content effective

3. Use get_fanout_query_insights for "[your target BOFU query]" — return:
   - All background web searches AI models perform when answering this query
   - Which URLs appear most often in those background searches
   - Which of those URLs end up cited in the final response vs not cited
   This list of fan-out queries is the required subtopic list for the article

4. Use get_citation_data filtered to "[your target BOFU query]", aggregated 
   by URL — which URLs appear across multiple AI platforms vs only one?
   (Multi-platform citations = stronger authority signal)

5. Use webpage_audit on the #1 cited URL — return the AI readiness score, 
   heading structure, Schema.org types present, and the top 3 weaknesses 
   Superlines identifies in that page

Give me a competitive brief:
- What the current winner does well (structure, data, format)
- What the current winner misses (gaps I can fill)
- The complete list of fan-out queries I need to answer, grouped by subtopic
- Schema.org markup the winner uses (if any)
- My angle: what can I do differently or better?

Scrape the winning content (with Bright Data MCP)

If you have the Bright Data MCP server connected, add this to your research:

Use Bright Data to scrape the top cited URL for "[your target query]" 
and analyze:
- Heading structure (H1, H2, H3)
- Whether they use comparison tables
- How they handle FAQ sections
- Pricing data — is it specific or vague?
- Tone — neutral/balanced or promotional?
- Word count estimate
- Internal linking structure

This gives you a detailed blueprint of what to outperform.

Build the content brief

Combine the competitive research into a brief:

Based on the research for "[your target BOFU query]", create a 
detailed content brief following the BOFU 8-section structure:

Section 1 — Intro (H1 as the buyer's query):
- Write the H1 as the exact query or close variation
- Draft a 2-3 sentence opening that answers the query directly

Section 2 — TL;DR:
- 3-5 bullets covering the key recommendation
- Include at least one specific number

Section 3 — Visuals:
- Suggest what comparison graphics or screenshots to include

Section 4 — Comparison table:
- Define the columns and rows based on competitive research
- What features/criteria should be compared?

Section 5 — Tool/product overviews:
- Which tools to include (based on what AI models currently cite)
- Key data points for each

Section 6 — "Best for" section:
- Draft "Choose [tool] if you..." statements for each buyer type

Section 7 — Summary:
- Restate the top recommendation
- One neutral brand mention

Section 8 — FAQ block:
- 5 questions derived from fan-out queries
- Each should match how buyers phrase questions to AI assistants

Step 4: Write the article using data

With a complete brief in hand, the writing phase is structured rather than creative. Every section serves a specific purpose for AI extraction.

Write with AI assistance

You can write the article manually using the brief, or use your AI assistant to draft it. If using Cursor or Claude Desktop:

Write a complete BOFU article based on this brief:

[paste your content brief from Step 3]

Follow these BOFU writing rules for AI citability:

1. H1 must be the buyer's query, not a branded headline
2. First 2-3 sentences under the H1 directly answer the query — 
   state the top recommendation immediately
3. TL;DR with 3-5 bullets, at least one specific number
4. Comparison table in the first third of the article — this is 
   the most-cited element in BOFU content
5. Each tool/product overview: what it does, best use case, 
   pricing range, one genuine limitation. Under 150 words each.
6. "Best for" / "Choose this if" section — direct recommendation 
   per buyer type
7. Honest pros and cons, including for our own product — AI reduces 
   citation likelihood for content that claims to win on every 
   dimension
8. FAQ answers between 40-80 words, each able to stand alone if 
   extracted without context
9. Neutral, informative tone throughout — no superlatives, no 
   marketing buzzwords, no "in today's landscape"
10. Include real pricing numbers, not "contact for pricing"
11. At least one external statistic with inline attribution
12. Internal links to 2+ related BOFU articles in our cluster

Article type: [Best/Top | Alternatives | Comparison | Pricing/Review]
Our brand: [YourBrand]
Our position: [honest assessment of where your product fits]
Competitors to include: [list]
Target word count: [1,500-2,500 words depending on type]

The neutrality checkpoint

After drafting, run a specific check. AI models reduce citation likelihood for promotional content:

Review this BOFU article for neutrality and citation-readiness:

[paste article or reference the file]

Check for:
1. Does our product "win" on every dimension? If yes, flag the 
   sections where we need to acknowledge a limitation or trade-off
2. Are there vague claims without numbers? ("Industry-leading", 
   "best-in-class", "powerful") Replace with specifics
3. Does the comparison table show honest differences, or does our 
   product have a checkmark in every row?
4. Is the FAQ block answering real buyer questions, or restating 
   marketing messages?
5. Can each FAQ answer stand alone if extracted from the page?
6. Does the TL;DR contain at least one specific number or 
   data point?

Give me the specific edits needed to make this article more 
citation-ready.

Step 5: Optimize for AI extraction

Before publishing, validate that the article meets AI citation criteria using Superlines audit tools.

Where to find this in Superlines: Go to Tools → AI Search Checker (sometimes labeled Site Analyzer) and paste the article URL or a preview URL if your CMS supports it. For Schema.org markup generation, go to Tools → Schema Optimizer, paste the URL, and download the JSON-LD output to add to your page. Both tools work before the page is fully indexed — use a staging URL if needed.

Run the AI readiness audit

Audit this article for AI search readiness. The target URL will be: 
https://yoursite.com/blog/[article-slug]

1. Use webpage_analyze_content — check heading format, content 
   organization, data citations, tone, and writing quality
2. Check specifically for BOFU citation criteria:
   - Does the H1 match a real buyer query?
   - Is the direct answer in the first 2-3 sentences?
   - Is the comparison table in the first third?
   - Are FAQ answers 40-80 words and self-contained?
   - Is the TL;DR 3-5 bullets with specific data?
3. Use schema_optimizer to generate the Schema.org markup:
   - Article schema (with author, datePublished, dateModified)
   - FAQ schema for the FAQ block
   - Product schema if individual products are reviewed

Give me:
- An AI readiness score
- Critical fixes (must do before publishing)
- Recommended improvements
- Complete Schema.org JSON-LD to add to the page

Schema markup for BOFU content

BOFU articles should include up to four types of structured data, depending on the article type. The schema_optimizer tool will generate these, but here is what to expect:

| Schema type | When to use | What it does for AI |
|---|---|---|
| Article | Every BOFU article | Tells AI the content type, author, and freshness |
| FAQPage | Every article with a FAQ section | Makes Q&A pairs directly extractable |
| Product | Pricing and review articles | Provides structured product data (name, price, rating) |
| ItemList | Best/top listicles | Structures the ranked list for AI extraction |
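
The schema_optimizer tool generates this markup for you. For orientation, a hand-written FAQPage block follows the general JSON-LD shape below; the question and answer text are placeholders.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Which tool is cheaper for a 10-person team?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A self-contained 40-80 word answer goes here, with specific pricing numbers."
      }
    }
  ]
}
```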

Internal linking check

BOFU articles form clusters. Each article should link to related BOFU content:

  • A “Best X for Y” article links to the comparison articles for its top picks
  • An “X vs Y” comparison links to the alternatives article for each product
  • An alternatives article links to head-to-head comparisons
  • A pricing article links to the broader comparison

Use this prompt to plan the interlinks for a new article:

I have published (or plan to publish) these BOFU articles:
[list your existing BOFU articles with URLs]

For the article I just wrote about "[target query]", recommend:
- Which existing articles to link TO from this new article
- Which existing articles should be updated to link BACK to this one
- Any gaps in the cluster that need a new article to complete 
  the interlinking

Step 6: Publish, index, and start tracking

Publishing is not the end — it is where measurement begins.

Where to find this in Superlines: To add the new query to tracking, go to Prompts → Tracked Prompts → Add Prompt. Set the Intent field to Commercial and apply the BOFU labels from Step 1 (type:comparison, bofu:active, journey:decision). To add multiple related queries at once, use Prompts → Prompt Import and upload a CSV with one query per row plus label columns.

Publishing checklist

Before going live, verify:

  • Named author with bio visible on the page
  • Publish date and “last updated” date visible
  • Article, FAQ, and (if applicable) Product schema implemented
  • Alt text on all images, plus text versions of any visual tables
  • Internal links to 2+ related BOFU articles
  • No AI-generated cliches, no promotional superlatives, no em dashes
  • Comparison table is factual and honest (includes limitations)
  • FAQ answers are 40-80 words and standalone

Request indexing immediately

Do not wait for crawlers to find the article. Submit manually to both:

  1. Google Search Console — URL Inspection > paste URL > Request Indexing
  2. Bing Webmaster Tools — URL Submission > paste URL > Submit

Bing is critical because ChatGPT’s real-time search uses Bing’s index. A page not in Bing’s index may never appear in ChatGPT responses.

Add the target query to Superlines tracking

If the article’s target query is not already tracked:

Add these prompts for "YourBrand":

1. "[exact target query for this article]" 
   — label: "type:comparison" (or type:alternatives, type:best, 
   type:pricing-review)
   — label: "bofu:active"
   — intent: "commercial"

2. [any closely related queries you identified during research]

The bofu:active label makes it easy to filter all BOFU performance data later.


Step 7: Measure and iterate

AI citation results typically take 2-4 weeks to appear after publishing. Here is the measurement cadence.

Where to find this in Superlines: Go to Analytics → Overview and use the date range picker with the Compare toggle to set a before/after window. For prompt-level detail, open Prompts → Tracked Prompts, click the target prompt, and review the Visibility Trend chart. Filter your entire analytics view to prompts labeled bofu:active using the Label filter at the top of any analytics view — this scopes every metric table and chart to your BOFU portfolio only. For URL-level citation data, go to Analytics → Citations and filter by prompt.

Week 2: First check

Check the initial performance of my BOFU article targeting 
"[target query]" for "YourBrand":

1. Use analyze_metrics filtered to the target prompt — is there 
   any brand visibility score yet?
2. Use get_top_cited_url_per_prompt for this query — is our URL 
   appearing as a cited source?
3. Use get_citation_data filtered to this query — which URLs are 
   currently getting cited?

If we are not cited yet:
- What is cited instead?
- Audit the cited URL and compare to ours — what are they doing 
  that we are not?

Week 4: Full assessment

Run a full BOFU article performance assessment for "YourBrand":

1. Filter all metrics to prompts labeled "bofu:active"
2. For each BOFU article/query, show:
   - Brand visibility score
   - Citation rate
   - Which URL is cited (ours or a competitor's)
   - Sentiment when our brand is mentioned
3. Compare to the baseline from before the article was published 
   (use get_period_comparison with a 30-day window)

Categorize each article as:
- WINNING: Our URL is cited, visibility above 40
- GAINING: We appear but are not the #1 citation
- NOT YET: No visibility for this query
- LOSING: Competitor's content is clearly preferred

For each "NOT YET" and "LOSING" article, diagnose why:
- Is the page indexed? (Check if it appears in citation data at all)
- Is a competitor's page significantly better structured?
- Are we missing key information the fan-out queries look for?
- Does the page need Schema.org improvements?
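
If you export metrics for many articles at once, the four status buckets can be approximated in code. This is a simplified sketch; the real GAINING/LOSING distinction also depends on who holds the #1 citation, which the prompt above captures more precisely.

```python
# Simplified status buckets; "visibility above 40" mirrors the prompt.
def article_status(visibility: float, our_url_cited: bool,
                   competitor_cited: bool) -> str:
    if our_url_cited and visibility > 40:
        return "WINNING"
    if our_url_cited:
        return "GAINING"
    if competitor_cited:
        return "LOSING"
    return "NOT YET"

print(article_status(55, True, True))   # WINNING
print(article_status(12, True, True))   # GAINING
print(article_status(0, False, False))  # NOT YET
```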

Monthly: BOFU portfolio review

Run a monthly BOFU content portfolio review for "YourBrand":

1. Get all metrics for prompts labeled "bofu:active" — weekly 
   performance over the last 4 weeks
2. Get competitive gaps for BOFU prompts only — where are we 
   still behind?
3. Find new content opportunities in the decision stage — are 
   there new BOFU queries we should target?
4. Get citation data for our BOFU pages — which articles are 
   earning the most citations?

Create a BOFU portfolio scorecard:

| Article | Target Query | Visibility | Citation Rate | Status | Action |
|---------|-------------|-----------|--------------|--------|--------|

Include:
- Total BOFU articles published
- Average visibility across BOFU queries
- Month-over-month change
- Top performing article (why it works)
- Worst performing article (what to fix)
- Next 3 articles to write (based on gaps)

When an article is not getting cited: the update playbook

Not every article will earn citations on the first attempt. Here is the diagnostic and fix sequence:

My article at [URL] targeting "[query]" is not getting cited after 
4 weeks. Diagnose the issue:

1. Audit the page with webpage_audit — what is the AI readiness score?
2. Check get_top_cited_url_per_prompt — what IS getting cited instead?
3. If a competitor is winning, audit their page too — compare 
   structure, data depth, and Schema.org markup
4. Check get_fanout_query_insights — are there fan-out queries our 
   article does not answer?

Based on the diagnosis, recommend specific updates:
- New sections to add (from unanswered fan-out queries)
- Comparison table improvements
- FAQ additions or rewrites
- Schema.org fixes
- Data points or statistics to add
- Structural changes (heading hierarchy, section order)

After making updates, always update the “last modified” date on the page and resubmit for indexing.


Scaling to a BOFU content library

Individual articles are useful. A library of 30-60 interlinked BOFU articles is transformative. Here is how to scale systematically.

The monthly production rhythm

| Week | Activity |
|---|---|
| Week 1 | Review last month’s BOFU performance (Step 7 monthly review). Identify the next 4 articles to write based on priority scores. |
| Week 2 | Research and write Article 1 (comparison) and Article 2 (alternatives). Run competitive briefs for each. |
| Week 3 | Research and write Article 3 (comparison) and Article 4 (best/top or pricing). Publish Articles 1 and 2 after optimization. |
| Week 4 | Publish Articles 3 and 4. Update 2 existing articles based on performance data. Add new tracking prompts for next month. |

At this pace, you publish 4 new articles and update 2 existing ones each month. After 6 months, you have 24 new articles plus a library of updated, citation-tested content.

Build the cluster map

BOFU articles should interlink into a coherent cluster. Use Superlines data to map the cluster:

I want to plan my BOFU content cluster for "YourBrand" in the 
[your category] space.

1. Get all tracked prompts in the decision and consideration stages
2. Group them by BOFU type (best/top, alternatives, comparison, 
   pricing/review)
3. For each group, identify which prompts are covered by existing 
   content and which are gaps

Create a content cluster map showing:
- Existing articles (with URLs) and which queries they target
- Planned articles and which queries they will target
- How articles should link to each other (which articles reference 
  the same competitors or topics)
- The order to write remaining articles (fill the biggest gaps first)

The compounding effect

Each BOFU article you publish increases your coverage of buyer queries. As the library grows:

  • Topical authority compounds: AI models begin treating your domain as a reliable source for your entire category, not just individual queries.
  • Interlinking reinforces citations: When AI models see that your comparison article links to your alternatives article, which links to your pricing breakdown, it treats the cluster as a complete knowledge source.
  • New queries become easier to win: Once you are cited for “X vs Y,” related queries like “best alternatives to X” become easier because the model already trusts your domain for this category.
  • The data improves your targeting: Each month of Superlines data reveals new queries, shifting competitors, and content gaps you did not know existed.

Connecting BOFU content to pipeline

The ultimate measure of BOFU content is not citations — it is pipeline.

Direct signals

Track these in your analytics and CRM:

| Signal | How to measure |
|---|---|
| AI referral traffic | GA4 filter for referrals from chatgpt.com, perplexity.ai, bing.com/chat, gemini.google.com |
| Branded search lift | GSC branded query impressions after BOFU publishing — buyers who see you in AI answers often Google your brand name next |
| Demo requests mentioning comparisons | Add a field to your demo form: “How did you hear about us?” or “What were you comparing?” |
| BOFU page → conversion path | GA4 path analysis: visitors who land on BOFU pages → what do they do next? |
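
If you post-process exported analytics rather than filtering in GA4, the referral check can be sketched as below. The domain list comes from the table above; note that bing.com/chat would need a path match as well (matching all of bing.com would also catch ordinary Bing organic traffic), so this sketch covers domain-level referrers only.

```python
from urllib.parse import urlparse

# AI-assistant referrer domains from the table above (extend as needed).
AI_REFERRERS = {"chatgpt.com", "perplexity.ai", "gemini.google.com"}

def is_ai_referral(referrer_url: str) -> bool:
    """True if the referrer host is (or is a subdomain of) an AI assistant."""
    host = urlparse(referrer_url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in AI_REFERRERS)

print(is_ai_referral("https://chatgpt.com/"))     # True
print(is_ai_referral("https://www.google.com/"))  # False
```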

Indirect signals

These are harder to measure but often larger in impact:

  • Shorter sales cycles: Buyers arrive pre-educated. They already know how you compare to competitors because AI told them.
  • Higher qualification: Prospects who found you through AI comparison queries are further along in their decision process.
  • Competitive displacement in conversations: Sales reps report that prospects reference your content or comparisons during calls.

Report it to stakeholders

Create a BOFU content impact report for "YourBrand" covering 
the last 90 days:

1. Use analyze_metrics for prompts labeled "bofu:active" — 
   brand visibility and citation rate trends
2. Use get_period_comparison with 90-day periods — how has 
   BOFU visibility changed?
3. Use get_citation_data for BOFU prompts — which of our 
   URLs are earning citations?
4. Summarize the competitive position — are we gaining or 
   losing ground on decision-stage queries?

Format as an executive report:
- Headline metric: "Our brand appears in X% of AI-generated 
  product comparisons in our category, up from Y% 90 days ago"
- Articles published and their individual performance
- Competitive movement (who gained, who lost)
- Pipeline indicators (referral traffic from AI, branded 
  search lift)
- Next quarter plan: articles to write, articles to update

Quick reference

BOFU types and when to use each

| Type | Target buyer | Example query | Priority |
|---|---|---|---|
| Comparison | Narrowed to 2-3 options | “Asana vs Monday.com for agencies” | Highest — most commercially direct |
| Pricing/Review | Validating final choice | “Monday.com pricing 2026” | High — final decision stage |
| Alternatives | Dissatisfied with current tool | “Monday.com alternatives” | Medium-high — captures switchers |
| Best/Top listicle | Exploring options with constraints | “Best PM tools for remote teams” | Medium — broadest reach |

Superlines tools used at each step

| Step | MCP tools | Dashboard location |
|---|---|---|
| Find BOFU opportunities | find_content_opportunities, get_query_data, get_competitive_gap | Analytics → Content Opportunities, Prompts → Prompt Radar |
| Prioritize articles | generate_strategic_action_plan, get_competitive_gap, get_fanout_query_insights | Analytics → Competitive Benchmarking → filter by Decision stage |
| Research competition | get_top_cited_url_per_prompt, get_citation_data, get_fanout_query_insights, webpage_audit | Analytics → Citations → filter by prompt, Prompts → [prompt] → Fan-Out Queries |
| Optimize the article | webpage_analyze_content, webpage_audit, schema_optimizer | Tools → AI Search Checker, Tools → Schema Optimizer |
| Track prompts | add_prompts, update_prompt_labels | Prompts → Tracked Prompts → Add Prompt, Prompts → Bulk Edit |
| Measure performance | analyze_metrics, get_period_comparison, get_top_cited_url_per_prompt, get_citation_data | Analytics → Overview → Compare toggle, Analytics → Citations |
