How to Run a GEO Audit With Claude Code in 15 Minutes
Marc-Olivier Bouchard
LLM AI Ranking Strategy Consultant

You can audit your AI search visibility in 15 minutes using Claude Code and xseek's CLI. No dashboards to click through, no spreadsheets to export. Just your terminal, two tools, and a clear picture of where your brand stands in AI-generated answers.
Most GEO audits take hours. You're pulling data from one platform, cross-referencing in another, building a spreadsheet to make sense of it all. That's because most tools were designed for browsers, not workflows.
Claude Code changes that. It's Anthropic's agentic coding tool that reads files, runs commands, and connects to external services through MCP (Model Context Protocol). Pair it with xseek's CLI and you get a setup where one AI agent pulls your visibility data, analyzes it, and tells you exactly what to fix, all in a single terminal session.
Here's how to set it up and run your first audit.
What You Need Before Starting
Two things. Both install in under 3 minutes.
Claude Code: Anthropic's CLI tool. Install it with:

```bash
curl -fsSL https://claude.ai/install.sh | bash
```

You'll need a Claude subscription (Pro at $20/month or Max at $100/month). Claude Code runs on Claude Opus 4 and Sonnet 4, which means it can process large datasets and reason through multi-step analysis without losing context.
xseek CLI: the command-line interface for xseek, an AI visibility analytics platform. xseek tracks how often your brand gets cited by ChatGPT, Claude, Perplexity, Gemini, and other AI engines. Plans start at $99.99/month.
Install and authenticate:
```bash
npm install -g xseek
xseek login YOUR_API_KEY
```

Get your API key from the xseek dashboard.
Once both are installed, connect them:
```bash
xseek init
```

This installs xseek as a set of custom skills inside Claude Code: slash commands like /find-opportunities, /track-visibility, and /optimize-page that Claude can execute directly.
Step 1: Pull Your AI Visibility Baseline
Open Claude Code in your project directory and ask it to pull your data:
```bash
claude
```

Then type:

```
Run xseek leaderboard for my site and show me where I rank vs competitors
```

Claude Code runs `xseek leaderboard yoursite.com --format json`, parses the output, and gives you a ranked table. You'll see something like:
| Rank | Brand | AI Mentions |
|---|---|---|
| 1 | SEMrush | 5,682 |
| 2 | Ahrefs | 4,744 |
| 3 | Profound | 4,620 |
| ... | ... | ... |
| 13 | Your Brand | 139 |
This is your baseline. The gap between your brand and the top-cited competitors is the ground you need to cover.
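The arithmetic behind that baseline is simple enough to sketch yourself. The snippet below computes your citation gap and share of voice from leaderboard data; the JSON field names here are illustrative assumptions, not xseek's documented output format:

```python
import json

# Simplified stand-in for `xseek leaderboard yoursite.com --format json`;
# the real field names and shape may differ.
payload = """[
  {"rank": 1, "brand": "SEMrush", "mentions": 5682},
  {"rank": 3, "brand": "Profound", "mentions": 4620},
  {"rank": 13, "brand": "Your Brand", "mentions": 139}
]"""

rows = json.loads(payload)
total = sum(r["mentions"] for r in rows)          # all tracked mentions
you = next(r for r in rows if r["brand"] == "Your Brand")
leader = min(rows, key=lambda r: r["rank"])       # top-ranked competitor

share = 100 * you["mentions"] / total             # your share of voice
gap = leader["mentions"] - you["mentions"]        # mentions behind the leader
print(f"Share of tracked mentions: {share:.1f}%")
print(f"Mentions behind {leader['brand']}: {gap}")
```

Asking Claude Code for these two numbers every week gives you a trend line, not just a snapshot.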
According to a 2024 study by Princeton researchers published at KDD, lower-ranked websites benefit more from GEO: sites outside the top five in Google saw up to 115% visibility improvement when applying citation- and statistics-based methods.
Step 2: Find What AI Models Search For (And Miss)
This is where the audit gets interesting. Ask Claude Code:
```
Run xseek web-searches for my site and group the queries by topic.
Flag any topic where we have no matching content.
```

Claude pulls the LLM search queries: the actual searches that AI models like GPT-5, Gemini, and Claude make when answering user questions about your industry. It groups them by theme and cross-references them against your sitemap pages.
The output looks like this:
Covered topics (you have content):
- AI robots.txt configuration → /docs/ai-robots-txt-guide
- Claude user agents → /docs/claude-user-agents

Uncovered topics (content gaps):
- "best answer engine optimization tools 2026" → 0 matching pages
- "AI writing tools for Perplexity ranking" → 0 matching pages
- "how to write articles that get cited by AI" → 0 matching pages
Each uncovered topic is a content opportunity. AI models searched for it, found nothing from your site, and cited a competitor instead.
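Under the hood, this kind of gap detection amounts to matching query terms against your page slugs. A minimal sketch, with made-up queries and slugs standing in for real xseek output and a deliberately naive overlap rule:

```python
import re

def tokens(text: str) -> set:
    """Lowercase alphanumeric tokens of 3+ characters."""
    return {t for t in re.split(r"[^a-z0-9]+", text.lower()) if len(t) > 2}

# Illustrative data, not real xseek output.
queries = [
    "best answer engine optimization tools 2026",
    "AI robots.txt configuration",
    "how to write articles that get cited by AI",
]
slugs = ["/docs/ai-robots-txt-guide", "/docs/claude-user-agents"]

def covered(query: str) -> bool:
    # A page "covers" a query if it shares at least 2 significant tokens.
    q = tokens(query)
    return any(len(q & tokens(slug)) >= 2 for slug in slugs)

gaps = [q for q in queries if not covered(q)]
for q in gaps:
    print(f'UNCOVERED: "{q}"')
```

Claude's actual grouping is semantic rather than token-based, so it catches paraphrases this sketch would miss, but the covered/uncovered logic is the same.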
Step 3: See Who's Getting Cited Instead of You
Ask Claude Code:
```
Run xseek sources for my site and show me the top 20 competitor URLs getting cited.
Group them by domain.
```

This reveals the specific pages that AI models cite when they skip your brand. A typical output:
| Domain | Total Citations | Top Page |
|---|---|---|
| writesonic.com | 1,018 | /blog/answer-engine-optimization-tools |
| nicklafferty.com | 588 | /blog/best-aeo-tools-answer-engine-optimization |
| rankability.com | 587 | /blog/best-ai-search-visibility-tracking-tools |
"Over $31 million has flowed into the AI visibility tools segment in the last two years," according to Rankability's 2026 analysis of the market. The competition for AI citations is real, and it's accelerating.
Now you know who you're competing against and which exact pages are winning.
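The domain grouping itself is trivial to reproduce. A sketch with illustrative URLs in place of real `xseek sources` output:

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative citation URLs; real data would come from `xseek sources`.
citations = [
    "https://writesonic.com/blog/answer-engine-optimization-tools",
    "https://writesonic.com/blog/aeo-guide",
    "https://rankability.com/blog/best-ai-search-visibility-tracking-tools",
]

# Count citations per domain, most-cited first.
by_domain = Counter(urlparse(u).netloc for u in citations)
for domain, count in by_domain.most_common():
    print(f"{domain}: {count}")
```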
Step 4: Cross-Reference With Google Search Console
Your GSC data tells you what's working in traditional search. Claude Code can compare it against your AI visibility:
```
Run xseek sitemap-pages for my site with 30 days of data.
Find pages with high GSC impressions but low AI impressions, and vice versa.
```

This cross-reference surfaces three types of pages:
- Strong in both: your best content. Protect it.
- Strong in Google, weak in AI: needs GEO optimization (structured data, citations, answer-first format).
- Strong in AI, weak in Google: already optimized for AI engines. Double down on backlinks and traditional SEO.
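That three-bucket classification is easy to reproduce directly. A sketch assuming simplified page records and arbitrary thresholds (the right cutoffs depend on your traffic levels):

```python
# Illustrative records; field names assume a simplified `sitemap-pages` output.
pages = [
    {"url": "/docs/ai-robots-txt-guide", "gsc": 11529, "ai": 12},
    {"url": "/blog/aeo-tools", "gsc": 40, "ai": 310},
    {"url": "/docs/claude-user-agents", "gsc": 5200, "ai": 480},
]

def bucket(page, gsc_min=500, ai_min=100):
    """Classify a page by where it is already visible."""
    strong_gsc = page["gsc"] >= gsc_min
    strong_ai = page["ai"] >= ai_min
    if strong_gsc and strong_ai:
        return "protect"            # strong in both
    if strong_gsc:
        return "needs GEO"          # Google-only visibility
    if strong_ai:
        return "needs classic SEO"  # AI-only visibility
    return "low priority"

for page in pages:
    print(page["url"], "->", bucket(page))
```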
Research from the Princeton GEO study found that combining fluency optimization with statistics addition produced a 35.8% improvement in AI visibility, the strongest result of any method combination tested.
Step 5: Get Keyword Data for Your Top Gaps
For the content gaps Claude identified, pull real Google search volume:
```
Run xseek keywords for my site on these topics:
"answer engine optimization tools", "AI visibility tracking", "GEO audit"
```

Claude runs `xseek keywords yoursite.com "answer engine optimization tools,AI visibility tracking,GEO audit" --format json` and returns:
| Keyword | Monthly Volume | KD |
|---|---|---|
| answer engine optimization | 1,900 | 30 |
| ai visibility tools | 1,000 | 3 |
| seo automation | 2,900 | 13 |
| geo audit | 70 | 0 |
Low keyword difficulty (KD) with decent volume means you can rank for these in traditional search while also targeting AI citations. That's the GEO sweet spot.
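Filtering for that sweet spot is a simple predicate over the keyword table. A sketch with the table above hard-coded and arbitrary cutoffs (KD at or below 30, at least 500 searches per month):

```python
# Keyword rows from the table above; cutoffs are illustrative, not canonical.
keywords = [
    {"kw": "answer engine optimization", "volume": 1900, "kd": 30},
    {"kw": "ai visibility tools", "volume": 1000, "kd": 3},
    {"kw": "seo automation", "volume": 2900, "kd": 13},
    {"kw": "geo audit", "volume": 70, "kd": 0},
]

# Keep workable-difficulty, real-volume keywords; easiest first.
sweet_spot = sorted(
    (k for k in keywords if k["kd"] <= 30 and k["volume"] >= 500),
    key=lambda k: k["kd"],
)
for k in sweet_spot:
    print(f'{k["kw"]}: {k["volume"]}/mo, KD {k["kd"]}')
```

Here "geo audit" drops out on volume alone, even at KD 0; a threshold rule like this keeps you from chasing keywords nobody searches.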
Step 6: Generate Your Action Plan
Now tie it all together. Ask Claude Code:
```
Based on everything we've found, create a prioritized action plan.
Rank by business impact: what should I fix or create first?
```

Claude Code synthesizes the leaderboard data, content gaps, competitor citations, GSC crossover, and keyword research into a single prioritized list:
1. Create a "Best AEO Tools 2026" comparison page: 1,900 searches/month, KD 30, and top competitors earning 500+ citations each.
2. Optimize the existing robots.txt guide: high GSC impressions (11,529) but missing an answer-first format and FAQ schema.
3. Create a "How to Write Content That AI Models Cite" guide: LLMs searched for this 12+ times in your tracked data.
4. Fix structured data: add FAQPage and Article schema to your top 5 pages for an immediate AI discoverability boost.
Each item links back to the data that supports it. No guessing.
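If you want the prioritization to be reproducible rather than left entirely to the model's judgment, you can hand Claude an explicit scoring rule in your prompt. A toy sketch with invented weights (the opportunity figures come from the plan above; the formula is an assumption, not xseek's):

```python
# Two opportunities from the action plan above.
opportunities = [
    {"name": "Best AEO Tools 2026 page", "volume": 1900, "kd": 30,
     "competitor_citations": 500},
    {"name": "GEO audit guide", "volume": 70, "kd": 0,
     "competitor_citations": 40},
]

def score(o):
    # Toy rule: search volume discounted by difficulty, boosted when
    # competitors already earn citations for the topic.
    return o["volume"] / (1 + o["kd"]) + 0.5 * o["competitor_citations"]

ranked = sorted(opportunities, key=score, reverse=True)
for i, o in enumerate(ranked, 1):
    print(f"{i}. {o['name']} (score {score(o):.0f})")
```

An explicit rule also makes week-over-week audits comparable: the ordering changes only when the data does.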
Why This Works Better Than Dashboard Audits
The traditional audit workflow looks like this: log into tool A, export CSV, log into tool B, export another CSV, open a spreadsheet, manually cross-reference, spend 2 hours building a report.
The Claude Code workflow: type a question, get an answer backed by live data. The AI agent does the cross-referencing, the grouping, the prioritization. You focus on decisions, not data wrangling.
"The future of search isn't links; it's answers," as Sundar Pichai, CEO of Google, put it during a keynote. The same principle applies to auditing. You shouldn't be hunting through tabs. You should be asking questions and getting answers.
Three specific advantages:
- Speed. A full audit runs in 10-15 minutes instead of 2-3 hours.
- Context. Claude Code holds your entire dataset in its context window for the session. It can reference your leaderboard data while analyzing GSC queries without you re-exporting anything.
- Repeatability. Save your prompts as a Claude Code skill (a markdown file in .claude/skills/) and run the same audit next week with one command.
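A saved skill might look like the sketch below. The frontmatter fields assume Claude Code's standard skill-file format, and the steps are simply the prompts from this audit; swap in your own domain:

```markdown
---
name: geo-audit
description: Run a full GEO audit for yoursite.com using xseek data
---

1. Run `xseek leaderboard yoursite.com --format json` and summarize rank changes.
2. Run `xseek web-searches yoursite.com` and flag topics with no matching content.
3. Run `xseek sources yoursite.com` and group competitor citations by domain.
4. Cross-reference `xseek sitemap-pages` data against GSC impressions.
5. Produce a prioritized action plan ranked by business impact.
```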
Automating Recurring Audits
Once you've run the audit manually, automate it. Claude Code supports scheduled tasks: recurring prompts that run on a cron schedule.
```bash
claude /schedule "Every Monday at 9am, run a GEO audit for my site
using xseek data and post a summary to Slack"
```

You can also use the /loop command for quick polling during a campaign:
```bash
claude /loop 6h "Check xseek leaderboard for my site and tell me
if my rank changed since last check"
```

Weekly audits catch visibility drops before they compound. According to xseek's own tracking data, AI citation patterns shift faster than traditional search rankings: a page can go from zero citations to 500+ within days if an AI model discovers it after a crawl update.
FAQ
What is a GEO audit?
A GEO audit measures how visible your brand is in AI-generated search results from engines like ChatGPT, Perplexity, and Gemini. It identifies content gaps where competitors get cited instead of you, and produces a prioritized list of pages to create or optimize.
Can I run a GEO audit without Claude Code?
Yes. xseek's CLI works on its own, and xseek also has a web dashboard. Claude Code adds the analysis layer: it cross-references multiple data sources, groups findings by theme, and generates action plans automatically. Without it, you'll need to do that synthesis manually.
How much does this setup cost?
Claude Code requires a Claude Pro subscription ($20/month) or Max ($100/month). xseek plans start at $99.99/month for the Visibility tier. Total: roughly $120-200/month for a complete AI visibility audit stack.
What AI engines does xseek track?
xseek monitors ChatGPT, Claude, Perplexity, Gemini, and DeepSeek. It tracks brand mentions, citations, and the specific web searches these models make when generating answers.
How often should I run a GEO audit?
Weekly. AI citation patterns change faster than Google rankings. A weekly audit catches drops early and helps you spot new opportunities as AI models update their knowledge.
Does this work for any website or just SaaS companies?
It works for any website tracked in xseek. SaaS, e-commerce, content publishers, agencies β the audit process is the same. The content gaps and competitor landscape will differ by industry, but the methodology applies universally.
What's the difference between GEO and AEO?
GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) describe the same practice: optimizing content to appear in AI-generated answers. GEO is the term used in Princeton's 2024 research paper. AEO is more common in industry. Both refer to making your content citable by AI search engines.
