Intermediate · 20 minutes

Automate GEO Analysis and Content Creation with the Superlines MCP Server

A workflow-focused guide to automating AI search visibility analysis, content opportunity discovery, competitive intelligence, and content production using the Superlines MCP server in Claude Desktop, Cursor, Loveable, and other MCP-compatible tools.


What this guide covers

This guide teaches you how to automate the entire Generative Engine Optimization (GEO) marketing process — from monitoring your brand’s AI search performance to producing content that gets cited by AI models. It is organized around workflows, not tools. Each workflow is a sequence of prompts and actions that you can run in any MCP-compatible AI assistant.

This is not a setup guide. If you need to install and configure the Superlines MCP server, start with the companion setup guide.

This is not a code guide. If you want to build a fully autonomous content pipeline with code, see the AEO Agent guide (introduced briefly at the end of this article).

This guide sits in between. It assumes you already have the Superlines MCP server connected and teaches you the actual workflows — the prompts, sequences, and patterns — that turn raw AI search data into marketing action.


Where these workflows run

The Superlines MCP server uses the standard Model Context Protocol over SSE (Server-Sent Events). Any tool that supports MCP can connect to it. Here is what that means in practice:

| Tool | What it is | Best for |
|------|------------|----------|
| Claude Desktop | Anthropic’s desktop app with native MCP support | Conversational analysis, reports, strategy sessions |
| Cursor | AI-powered code editor with MCP support | Content creation, technical audits, building automation scripts |
| Loveable | AI app builder with MCP support | Building dashboards, internal tools, marketing apps |
| Windsurf | AI coding assistant with MCP support | Content workflows, site optimization, schema markup |
| Any MCP client | The protocol is open — new tools adopt it regularly | Varies |

The prompts in this guide work identically across all of these. The only difference is how each tool presents the results — Claude Desktop uses conversational responses, Cursor can write files directly, Loveable can build UI around the data.

The connection (one line)

Every MCP client needs the same SSE URL:

https://mcpsse.superlines.io?token=YOUR_SUPERLINES_API_KEY

Get your API key from Superlines Organization Settings > API Keys. It starts with sl_live_. Paste it into your tool’s MCP configuration and you are connected to 32 AI search analytics tools.
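If you want to sanity-check the URL before pasting it into a client, here is a small Python sketch. The URL format and the `sl_live_` key prefix come from this guide; the helper function itself is ours, not part of any Superlines SDK.

```python
# Build the Superlines SSE endpoint URL from an API key.
# The sl_live_ prefix check mirrors the key format described above.

def superlines_sse_url(api_key: str) -> str:
    """Return the SSE endpoint URL for a Superlines API key."""
    if not api_key.startswith("sl_live_"):
        raise ValueError("Superlines API keys start with 'sl_live_'")
    return f"https://mcpsse.superlines.io?token={api_key}"

print(superlines_sse_url("sl_live_example123"))
# https://mcpsse.superlines.io?token=sl_live_example123
```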


How automation works with MCP

When you type a prompt in Claude Desktop, Cursor, or any MCP client, the AI assistant reads your request, decides which Superlines tools to call, executes them, and returns the results in natural language. You do not need to know the tool names or parameters — you describe what you want and the assistant figures out the rest.

This is different from using a dashboard. Instead of navigating to a page, selecting filters, and reading charts, you describe the analysis you need in plain English and get the answer back in seconds. That difference is what makes automation possible: you can chain multiple analyses together, apply business logic, and produce deliverables — all from a single prompt.

Here is a simple example of the difference:

Dashboard approach (manual):

  1. Log in to Superlines
  2. Navigate to the Overview page
  3. Set the date range to last 30 days
  4. Note the brand visibility score
  5. Switch to the Competitors tab
  6. Compare your score to competitors
  7. Switch to the Content Opportunities page
  8. Review the list of gaps
  9. Open a spreadsheet and write up findings
  10. Format into a report

MCP approach (automated):

Analyze my brand "Acme" performance for the last 30 days. Show brand visibility 
and citation rate trends, compare them against competitors, identify the top 5 
content opportunities, and format everything as a weekly report with an executive 
summary and recommended actions.

One prompt. Same output. The AI assistant calls analyze_metrics, get_competitive_gap, find_content_opportunities, and get_next_actions behind the scenes, then assembles the results into a coherent report.


Workflow 1: Weekly brand health analysis

Goal: Produce a recurring performance report that tracks your brand’s AI search visibility over time and surfaces what changed.

When to run: Every Monday morning, or at whatever cadence your team reviews marketing metrics.

Why it matters: AI search visibility shifts week to week as models retrain, competitors publish new content, and your own pages get indexed. A weekly report catches drops early and identifies momentum you can build on.

The prompt

Run a weekly brand health analysis for "YourBrand":

1. Get weekly performance trends for the last 4 weeks — brand visibility, 
   citation rate, share of voice, and mention count
2. Compare current period metrics vs the previous period (30-day comparison) 
   — call out anything that changed by more than 10%
3. Get the analytics summary grouped by LLM service (ChatGPT, Gemini, 
   Perplexity, Claude, Copilot) — are there platforms where we are strong 
   or weak?
4. Get the top 5 best-performing prompts (queries where we score highest 
   on visibility)
5. Get the top 5 competitive gaps (queries where a competitor outperforms us)
6. Analyze sentiment — what is the overall positive/neutral/negative split?

Format the results as a weekly report with:
- Executive summary (3-4 bullet points a CMO can scan in 10 seconds)
- Performance trends table (week-over-week numbers)
- Platform breakdown (which AI models cite us most/least)
- Wins this week (best-performing queries)
- Risks this week (biggest competitive gaps)
- Sentiment snapshot
- Top 3 recommended actions for next week

What happens behind the scenes

The assistant calls these Superlines MCP tools in sequence:

  1. get_weekly_performance — 4 weeks of trend data
  2. get_period_comparison — 30-day vs prior 30-day deltas
  3. get_analytics_summary grouped by llm_service — platform breakdown
  4. get_best_performing_prompt — top queries
  5. get_competitive_gap — where competitors lead
  6. analyze_sentiment — sentiment distribution

It then synthesizes all six data pulls into a single report. In Claude Desktop, you get the report in the conversation. In Cursor, you can ask it to save the report as a markdown file. In Loveable, you could build a dashboard that displays it.
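The "changed by more than 10%" rule from step 2 of the prompt is simple enough to express in plain Python, which is useful if you later script the report. The metric names and numbers below are illustrative, not real Superlines output.

```python
# Flag metrics whose period-over-period change exceeds a threshold,
# mirroring step 2 of the weekly report prompt.

def flag_changes(current: dict, previous: dict, threshold: float = 0.10) -> dict:
    """Return metrics whose relative change exceeds the threshold, as percentages."""
    flagged = {}
    for metric, now in current.items():
        before = previous.get(metric)
        if not before:  # skip missing or zero baselines
            continue
        change = (now - before) / before
        if abs(change) > threshold:
            flagged[metric] = round(change * 100, 1)
    return flagged

current = {"brand_visibility": 62, "citation_rate": 18, "share_of_voice": 31}
previous = {"brand_visibility": 55, "citation_rate": 17, "share_of_voice": 30}
print(flag_changes(current, previous))  # {'brand_visibility': 12.7}
```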

Variations

Deeper competitor focus:

Run the same weekly report but add a section that shows the top 3 competitors 
by share of voice, their week-over-week changes, and which specific queries 
they are winning that we are losing.

Platform-specific deep dive:

Run the weekly report but focus only on Perplexity performance. Show which 
queries we appear in, our average position, which URLs are cited, and how 
this compares to our performance on ChatGPT.

Save as a file (Cursor / Claude Desktop with Filesystem MCP):

Run the weekly report and save it as a markdown file named 
"weekly-report-2026-02-19.md" in my reports folder.

Workflow 2: Content opportunity discovery

Goal: Find topics where AI models are answering questions about your industry but not citing your brand, then turn those gaps into actionable content briefs.

When to run: Bi-weekly or monthly, timed to your content planning cycle.

Why it matters: Every unanswered query is a reader you are not reaching. AI models pull answers from whatever content they can find — if you do not have a page that answers the question well, a competitor’s page gets cited instead.

Phase 1: Find the gaps

Find content opportunities for "YourBrand":

1. Use find_content_opportunities to identify topics where we have high query 
   volume but low brand visibility
2. Use get_competitive_gap to find the specific prompts where competitors 
   outperform us — include AI analysis of competitor content so I can 
   understand WHY they are winning
3. Use get_fanout_query_insights to see which web searches AI models perform 
   when answering these queries — these are the exact searches that lead to 
   citations

For each opportunity, tell me:
- The query/topic
- Current brand visibility score (0-100)
- Which competitor is winning and their visibility score
- The fan-out queries AI models use (these reveal what content needs to contain)
- Estimated priority (high/medium/low based on volume and gap size)

Phase 2: Generate content briefs

Once you have the opportunities, turn them into briefs:

Take the top 3 content opportunities from the analysis and create a detailed 
content brief for each one:

For each brief include:
- Target query (the exact prompt AI models test)
- Fan-out queries to answer (the web searches AI models make)
- Recommended page title and URL slug
- Recommended H1 (format as a question that matches the target query)
- Key H2/H3 sections (each should answer a fan-out query)
- Content requirements:
  - What specific questions must be answered
  - What data points or statistics to include
  - What comparison or evaluation criteria to cover
  - What structured data (Schema.org) to add
- Competitor content to outperform (what they did well, what they missed)
- Word count estimate
- Internal pages to link to

Phase 3: Validate with a page audit

Before writing new content, check if you already have a page that could be optimized instead:

For each of the 3 content opportunities, use webpage_audit to check if we 
already have a page on our site that covers this topic. Audit these URLs:

- https://yoursite.com/page-that-might-cover-topic-1
- https://yoursite.com/page-that-might-cover-topic-2
- https://yoursite.com/page-that-might-cover-topic-3

For each audit, tell me:
- Does the existing page adequately answer the target query?
- What is the LLM-friendliness score?
- What specific changes would improve AI citability?
- Should we optimize this existing page or create a new one?

Why this three-phase approach works

Phase 1 gives you data. Phase 2 turns data into a plan. Phase 3 prevents duplicate content by checking what already exists. Most marketing teams skip Phase 3 and end up creating new pages that compete with their own existing content.
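One way to make the high/medium/low priority from Phase 1 reproducible is to score each opportunity by volume times gap size. The thresholds below are assumptions to tune for your own account, not Superlines logic.

```python
# Illustrative priority scoring for content opportunities:
# score = query volume x visibility gap (competitor minus us, 0-100 scale).

def opportunity_priority(query_volume: int, gap: float) -> str:
    """Return 'high', 'medium', or 'low' based on volume-weighted gap size."""
    score = query_volume * gap
    if score >= 2000:
        return "high"
    if score >= 500:
        return "medium"
    return "low"

print(opportunity_priority(query_volume=80, gap=40))  # high (score 3200)
print(opportunity_priority(query_volume=30, gap=25))  # medium (score 750)
print(opportunity_priority(query_volume=10, gap=10))  # low (score 100)
```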


Workflow 3: Competitive intelligence

Goal: Understand exactly how competitors are winning AI citations and build a strategy to displace them.

When to run: Monthly, or whenever you notice a competitor gaining visibility in your weekly reports.

Why it matters: AI search is not a fixed pie — but the top citation positions are limited. When a competitor consistently ranks ahead of you for a given query, they are training models to prefer their content. The longer you wait, the harder it becomes to displace them.

The prompt

Run a competitive intelligence analysis for "YourBrand":

1. Get competitor insights — which brands are mentioned most often in AI 
   responses for my tracked queries, and which domains get the most citations?
2. Use get_competitive_gap with AI analysis enabled to find prompts where 
   competitors lead. For each gap, I want to understand WHY the competitor 
   is winning (content structure, data quality, schema markup, etc.)
3. Use get_top_cited_url_per_prompt to see the #1 cited URL for each tracked 
   query — which specific pages are winning?
4. Use get_citation_data aggregated by domain to show the overall citation 
   distribution — what percentage goes to each competitor?
5. Use analyze_brand_mentions grouped by brand to see how often each 
   competitor is mentioned vs. us

Organize the results as a competitive intelligence briefing:
- Competitive landscape overview (who are the top 5 competitors by AI visibility)
- Head-to-head comparison for each competitor:
  - Their strengths (queries where they beat us)
  - Their weaknesses (queries where we beat them)
  - Their most-cited URLs
- Displacement priorities (rank competitors by how much visibility we could 
  gain by outperforming them)
- Specific content actions: for each priority competitor, what content should 
  we create or optimize to win their citations?

Follow-up: Deep dive on a specific competitor

I want to focus on [Competitor Name]. They are winning citations for 
[specific query]. 

1. Use get_competitive_gap scoped to that prompt to get a full analysis
2. Audit the competitor's winning URL using webpage_audit — what makes 
   it effective for AI citation?
3. Audit our competing page using webpage_audit — where do we fall short?
4. Use schema_optimizer on both URLs — compare their structured data to ours

Give me a specific action plan to overtake them on this query, with:
- Content changes needed (what to add, remove, restructure)
- Schema markup improvements (with code snippets)
- Heading structure recommendations
- External data or citations to add
- Estimated timeline to see results

Workflow 4: Page-level optimization

Goal: Take a specific page on your site and optimize it for AI citability using data-driven recommendations.

When to run: Whenever you publish new content or identify an underperforming page in your weekly report.

Why it matters: AI models evaluate pages on specific criteria — factual accuracy, structured data, direct answers to questions, authority signals. A page can rank well in Google but never get cited by AI models if it does not meet these criteria.

Step 1: Audit the page

Run a complete AI search readiness audit on this page: 
https://yoursite.com/your-page

1. Use webpage_audit for a comprehensive LLM-friendliness analysis — content 
   quality, heading structure, citations, tone, marketing language
2. Use webpage_analyze_technical for structured data, metadata, accessibility
3. Use schema_optimizer to generate optimized Schema.org markup

Give me the results organized as:
- Overall AI readiness score
- Critical issues (must fix — these actively hurt AI citability)
- Important improvements (should fix — these would significantly help)
- Nice-to-haves (minor improvements)
- Optimized Schema.org JSON-LD code I can copy and paste

Step 2: Check which queries this page should win

Use get_query_data to find tracked queries that relate to the topic of 
this page: [describe the page topic]. Which of our tracked prompts should 
this page be answering?

Then use get_top_cited_url_per_prompt for those queries to see which URLs 
currently win. Is our page among them? If not, which competitor page is 
winning and why?

Step 3: Implement and verify

After making changes to the page based on the audit recommendations:

Re-audit this page after my changes: https://yoursite.com/your-page

1. Run webpage_audit again and compare the new score to the previous audit
2. Run webpage_analyze_technical to confirm the Schema.org markup is 
   correctly implemented
3. Check if the critical issues from the previous audit are resolved

Summarize what improved and what still needs work.

Workflow 5: End-to-end content production

Goal: Go from a data-driven content brief to a published article that is structured for AI citation, all within a single session.

When to run: Whenever you need to create a new piece of content targeting a specific AI search query.

Why it matters: Content created without AI search data in mind relies on guesswork. Content created with data — knowing exactly what queries to target, what fan-out searches to answer, what competitors wrote, and what structured data to include — has a measurably higher chance of getting cited.

Phase 1: Research

I want to write an article targeting this AI search query: 
"[your target query]"

1. Use get_query_data to understand this query — what is the intent, 
   user journey stage, and how many prompts test it?
2. Use get_fanout_query_insights to find the web searches AI models 
   make when answering this query
3. Use get_competitive_gap for this query — who is currently winning 
   and what does their content look like?
4. Use get_top_cited_url_per_prompt to find the #1 cited URL
5. Audit the top cited URL with webpage_audit — what makes it effective?

Summarize the research:
- What exactly do AI models look for when answering this query?
- What fan-out searches do they perform (these become our section headings)?
- What does the winning content do well?
- What does the winning content miss (our opportunity to be better)?

Phase 2: Outline

Based on the research, create a detailed article outline:

- Title: Format as a question or "How to" that directly matches the 
  target query
- H1: Should be the target query (or a close variation)
- H2 sections: Each one should answer a fan-out query
- H3 subsections: Break down complex answers into scannable parts
- For each section, note:
  - The specific question it answers
  - Key data points or statistics to include
  - Whether to use a comparison table, numbered list, or paragraph format
  - Any Schema.org markup relevant to this section (FAQ, HowTo, etc.)
- Meta description: Direct answer to the target query in 150-160 characters
- Recommended word count
- Internal links to include
- External sources to cite (for credibility signals)
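The 150-160 character meta-description rule is easy to verify before publishing. A minimal check, with an example description of our own invention:

```python
# Check the meta-description length rule from the outline (150-160 chars).

def meta_description_ok(text: str) -> bool:
    return 150 <= len(text) <= 160

draft = (
    "Generative Engine Optimization (GEO) is the practice of structuring "
    "content so AI models like ChatGPT and Perplexity cite it when "
    "answering user questions."
)
print(len(draft), meta_description_ok(draft))
```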

Phase 3: Write

Write the full article based on this outline. Follow these rules for 
AI citability:

1. Open with a direct, concise answer to the target query in the first 
   paragraph — AI models often extract the opening as a citation
2. Use the H2/H3 structure from the outline exactly — each heading should 
   be a question that a fan-out query would match
3. Include specific numbers, data points, and named examples wherever 
   possible — vague statements do not get cited
4. Add comparison tables where relevant — AI models heavily cite tabular 
   data
5. Keep paragraphs short (3-4 sentences max) — AI models extract 
   passages, not pages
6. Include a FAQ section at the end using the remaining fan-out queries 
   as questions
7. Write in a neutral, informative tone — avoid promotional language, 
   superlatives, and marketing buzzwords
8. Cite external sources (industry reports, statistics) — this signals 
   authority to AI models
9. Include Schema.org FAQ markup for the FAQ section

Phase 4: Optimize

Now audit the article I just wrote:

1. Run webpage_analyze_content on the draft — check heading format, 
   content organization, data citations, tone, and writing quality
2. Generate optimized Schema.org markup with schema_optimizer for the 
   target URL where this will be published
3. Check that every fan-out query from the research phase is answered 
   somewhere in the article

Give me:
- A list of specific edits to make
- The complete Schema.org JSON-LD to add to the page
- A confidence score: how likely is this article to get cited for 
  the target query?

Why this four-phase approach produces better content

Most content teams write first and optimize later (if at all). This workflow reverses the order — you research first, structure the content around actual AI search behavior, write to match that structure, then verify the result meets the criteria. Every decision is grounded in data rather than assumptions.


Workflow 6: Prompt portfolio management

Goal: Build and maintain the set of queries you track in Superlines, ensuring you are monitoring the right questions and catching emerging opportunities.

When to run: Monthly review, plus ad-hoc additions when you publish new content or enter new markets.

Why it matters: Your tracked prompts define what you can see. If you are only tracking branded queries (“What is YourBrand?”), you miss the non-branded queries where most of the AI search volume sits (“What is the best tool for X?”). Expanding your prompt portfolio reveals new opportunities and competitors you did not know existed.

Review your current portfolio

List my current tracked prompts for "YourBrand":

1. Use get_query_data to show all tracked queries with their intent, 
   user journey stage, and category
2. Group them by intent (informational, commercial, navigational) and 
   by user journey stage (awareness, consideration, decision)
3. Identify gaps: are there journey stages or intents that have no 
   tracked queries?
4. Show which queries have the highest and lowest brand visibility — 
   are we tracking queries we are already winning, or ones where we 
   need improvement?

Summarize the portfolio health:
- Total prompts tracked
- Distribution by intent and journey stage
- Queries with visibility above 50 (strong position)
- Queries with visibility below 20 (needs work)
- Missing coverage areas

Expand with new prompts

Suggest 15 new prompts to add to my tracking portfolio for "YourBrand". 
We are in the [your industry] space and our main product/service is 
[describe what you do].

Use these criteria:
1. Cover all journey stages: awareness ("what is X"), consideration 
   ("best X tools/solutions"), decision ("X vs Y", "X pricing", "X reviews")
2. Include non-branded queries where competitors might be winning
3. Include queries that match our existing content topics
4. Include emerging queries related to trends in our industry

For each suggested prompt, explain:
- The query text
- Why it matters for our brand
- What journey stage it represents
- What intent type it is
- Which competitors likely appear for this query

Add prompts to your account

Add these new tracking prompts for "YourBrand":

[paste the list of prompts you approved from the suggestions]

For each prompt, set:
- Intent: [informational/commercial/navigational as suggested]
- Label: [category name for organization, e.g., "product", "comparison", 
  "awareness"]

The assistant calls add_prompts with the prompts you provide. New prompts start being tracked in the next crawl cycle (typically within 24-48 hours). After a few days, you will start seeing visibility data for these queries.
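If you prepare the batch outside the chat (in a spreadsheet or script), it helps to validate it before pasting. The exact parameters `add_prompts` accepts may differ; this sketch only mirrors the fields the workflow sets (prompt text, intent, label).

```python
# Illustrative shape for a batch of new tracking prompts, validated
# against the three intent types used in this guide.

VALID_INTENTS = {"informational", "commercial", "navigational"}

new_prompts = [
    {"prompt": "best project management tools", "intent": "commercial", "label": "comparison"},
    {"prompt": "what is generative engine optimization", "intent": "informational", "label": "awareness"},
]

for entry in new_prompts:
    assert entry["intent"] in VALID_INTENTS, f"unknown intent: {entry}"

print(f"{len(new_prompts)} prompts ready to submit")
```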


Chaining workflows together

The real power of MCP automation is not individual workflows — it is chaining them together into a complete marketing cycle. Here is how the workflows connect:

┌─────────────────────────────────────────────────────────────┐
│                    Weekly Health Report                       │
│                    (Workflow 1)                               │
│                                                              │
│  "Visibility dropped on 3 queries this week"                 │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│               Content Opportunity Discovery                  │
│               (Workflow 2)                                    │
│                                                              │
│  "These 3 topics have high volume but we are not cited"      │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│               Competitive Intelligence                       │
│               (Workflow 3)                                    │
│                                                              │
│  "Competitor X wins these queries — here is why"             │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│           Page-Level Optimization                            │
│           (Workflow 4)                                        │
│                                                              │
│  "Existing page needs these fixes for AI citability"         │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│           Content Production                                 │
│           (Workflow 5)                                        │
│                                                              │
│  "New article created, optimized, and ready to publish"      │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
┌─────────────────────────────────────────────────────────────┐
│           Portfolio Management                               │
│           (Workflow 6)                                        │
│                                                              │
│  "Added new prompts to track the queries we just targeted"   │
└─────────────────────────────────────────────────────────────┘

Example: Full-cycle prompt

You can compress the entire cycle into a single session:

I am the marketing lead for "YourBrand" (yoursite.com). Run a full GEO 
marketing cycle:

1. ANALYZE: Get my brand health for the last 30 days. Show visibility 
   trends, platform breakdown, and sentiment.

2. IDENTIFY: Find the top 3 content opportunities where we have the 
   biggest gap between query volume and brand visibility.

3. INVESTIGATE: For the #1 opportunity, run a competitive analysis — 
   who is winning, what does their content look like, and what are the 
   fan-out queries?

4. PLAN: Create a detailed content brief for a new article targeting 
   that #1 opportunity. Include heading structure, key points to cover, 
   data to include, and Schema.org recommendations.

5. WRITE: Draft the article following AI citability best practices — 
   direct answers, structured data, comparison tables, FAQ section.

6. VERIFY: Audit the draft with webpage_analyze_content. List any 
   improvements needed.

7. TRACK: Suggest 5 new prompts to add to my portfolio based on the 
   topics we uncovered.

Give me a summary at the end: what we learned, what we created, and 
what to do next week.

Platform-specific patterns

While the Superlines MCP tools work identically everywhere, each platform has strengths worth leveraging.

Claude Desktop

Best for: conversational analysis, strategic planning, reports.

Claude Desktop excels at multi-step reasoning over large datasets. It handles the analytical workflows (1-3) particularly well because it can synthesize findings across multiple tool calls into coherent narratives.

Tips:

  • Start each session with context: “I am the marketing lead for [brand]. My website is [domain].”
  • Use Projects to maintain context across sessions — create a “GEO Analysis” project and pin your brand details.
  • Add the Filesystem MCP server to save reports as files you can share with your team.
  • When Claude hits the context limit, start a new conversation and paste the summary from the previous one.

Cursor

Best for: content creation, file generation, technical optimization.

Cursor can read and write files directly in your project. This makes it ideal for content production (Workflow 5) and technical optimization (Workflow 4) because it can create markdown files, edit HTML, add Schema.org markup, and update your codebase.

Tips:

  • Use Cursor’s agent mode (Cmd+I) for multi-step workflows — it maintains context better across tool calls.
  • Ask Cursor to create content directly as files: “Write the article and save it as src/content/articles/your-article.md”
  • For schema optimization, ask Cursor to add the JSON-LD directly to your page template.
  • Cursor can also run the AEO Agent as a project-level automation.

Loveable

Best for: building internal dashboards, marketing apps, visual reporting.

Loveable can build functional web applications around the MCP data. Instead of getting a text report, you can build an interactive dashboard that pulls live data from Superlines.

Tips:

  • Ask Loveable to build a “GEO Dashboard” that displays your weekly metrics with charts.
  • Create an internal tool that runs the content opportunity workflow and displays results in a sortable table.
  • Build a content brief generator that your team can use without knowing how to write prompts.

Windsurf and other MCP clients

The prompts in this guide work in any MCP-compatible tool. The only requirements are:

  1. The tool supports SSE-based MCP connections
  2. You can paste the Superlines MCP URL into its configuration
  3. The tool supports multi-step tool calling (most modern AI assistants do)

Amplifying results with additional MCP servers

The Superlines MCP server handles AI search analysis. Adding complementary MCP servers extends what you can automate.

Bright Data MCP — Scrape competitor content

Bright Data gives your AI assistant the ability to access and extract content from any webpage. Combined with Superlines competitive analysis, this lets you move from “Competitor X is winning” to “Here is exactly what their content says and how it is structured.”

Example chain:

1. Use Superlines get_top_cited_url_per_prompt to find the #1 cited URL 
   for "best project management tools"
2. Use Bright Data scrape_as_markdown to extract the full content of 
   that winning URL
3. Analyze: What heading structure do they use? What data points do they 
   include? How do they format comparisons?
4. Use these insights to write a better version

Connection: https://mcp.brightdata.com/sse?token=YOUR_BRIGHTDATA_TOKEN

DataForSEO MCP — Traditional SEO data

DataForSEO provides Google search rankings, keyword volumes, backlink profiles, and on-page analysis. Combined with Superlines, this lets you build content strategies that work for both traditional search and AI search.

Example chain:

1. Use Superlines find_content_opportunities to identify AI search gaps
2. Use DataForSEO to check Google search volume and difficulty for each 
   gap topic
3. Prioritize topics that have BOTH high AI query volume AND high Google 
   search volume — these deliver maximum ROI

Filesystem MCP — Save and accumulate reports

The Filesystem MCP server lets your AI assistant read and write files on your computer. This turns one-off analyses into a running record you can reference and share.

Example chain:

1. Run the weekly health report (Workflow 1)
2. Save it as weekly-report-2026-02-19.md
3. Next week, run the report again and ask: "Compare this week's results 
   to the report saved in weekly-report-2026-02-19.md. What changed?"
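If you script the filing step, a date-stamped filename keeps the running record sortable. The naming matches the convention used in this guide; the helper is ours.

```python
# Generate a date-stamped weekly report filename (weekly-report-YYYY-MM-DD.md).

from datetime import date

def weekly_report_filename(d: date) -> str:
    return f"weekly-report-{d.isoformat()}.md"

print(weekly_report_filename(date(2026, 2, 19)))
# weekly-report-2026-02-19.md
```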

CMS MCP servers — Publish directly

If your CMS has an MCP server (Sanity, WordPress, Notion), you can go from content creation to publishing without leaving your AI assistant.

Example chain:

1. Run the full content production workflow (Workflow 5)
2. Publish the article directly to your CMS
3. Add the target query as a new tracked prompt in Superlines (Workflow 6)
4. Set a reminder to check the article's visibility in 2 weeks

From manual prompts to autonomous pipelines

The workflows in this guide are designed to be run manually — you paste a prompt, review the results, make decisions, and move to the next step. This is the right starting point because it teaches you how GEO analysis works and what good output looks like.

Once you are comfortable with the workflows, there are three paths to further automation:

Level 1: Saved prompt templates

Save your most-used prompts as templates. When it is Monday morning, open your template, paste it in, and review the output. This takes 5 minutes instead of 30.

Create a document with your standard prompts and keep it alongside your other marketing processes. Customize the prompts over time based on what works for your brand.

Level 2: Claude Projects or Cursor rules

Both Claude Desktop (via Projects) and Cursor (via rules files) support persistent instructions. You can configure them to always know your brand name, website, competitors, and preferred report format — so you do not need to repeat that context every time.

Claude Desktop Project instruction example:

You are a GEO marketing analyst for "Acme Corp" (acme.com). 
Our main competitors are CompetitorA, CompetitorB, and CompetitorC. 
When I ask for a report, always use the Superlines MCP server and 
format results with an executive summary, data tables, and recommended 
actions. Always specify our brand name when calling Superlines tools.

Cursor rules file example (.cursor/rules/geo-analyst.mdc):

When working on GEO analysis, always use the Superlines MCP server 
with brand name "Acme Corp". Save all reports to the /reports directory 
as markdown files with date-stamped filenames.

Level 3: Autonomous agent (AEO Agent)

For fully hands-off automation, the Superlines AEO Agent runs a 7-phase pipeline that performs analysis, competitive research, content auditing, fact-checking, and content creation on a schedule. It uses the same Superlines MCP tools described in this guide, but wrapped in an agent framework (Mastra) that executes them autonomously.

The AEO Agent is the natural endpoint of the journey:

  1. Start here — Manual prompts in Claude Desktop, Cursor, or Loveable
  2. Optimize — Saved templates + persistent context
  3. Automate — AEO Agent running on a daily schedule

Each level builds on the previous one. You do not need to jump to Level 3 immediately — the manual workflows in this guide already save hours of marketing time every week.


Quick reference: Superlines MCP tools by workflow

| Workflow | Primary tools used |
|----------|--------------------|
| Weekly health report | get_weekly_performance, get_period_comparison, get_analytics_summary, get_best_performing_prompt, get_competitive_gap, analyze_sentiment |
| Content opportunities | find_content_opportunities, get_competitive_gap, get_fanout_query_insights, webpage_audit |
| Competitive intelligence | get_competitor_insights, get_competitive_gap, get_top_cited_url_per_prompt, get_citation_data, analyze_brand_mentions |
| Page optimization | webpage_audit, webpage_analyze_technical, webpage_analyze_content, schema_optimizer |
| Content production | get_query_data, get_fanout_query_insights, get_competitive_gap, get_top_cited_url_per_prompt, webpage_audit, webpage_analyze_content, schema_optimizer |
| Portfolio management | get_query_data, add_prompts, update_prompt_labels |
