# How to Create an AI Agent Skill for Automated SEO Auditing & Fixes
Compatible with Claude Code, Claude.ai, and any agent supporting the Agent Skills specification
SEOgent provides a powerful CLI and API that AI agents can use to perform comprehensive SEO audits on websites. By creating a custom skill, you give your AI coding assistant the ability to scan any site for SEO issues, review the results, and apply fixes directly in your codebase — all from within your normal development workflow.
This guide walks you through creating a skill that integrates the SEOgent CLI with your AI agent. While the examples focus on Claude (via Claude Code and Claude.ai), the same approach works with any agent that supports custom skills or tool definitions.
Want to skip the setup? Download the SEOgent SEO Skill — a ready-made skill package you can drop into your project and start scanning immediately.
By the end of this guide, you'll have a skill that lets your agent scan any site for SEO issues, review the results, and apply fixes directly in your codebase.

## What Are Agent Skills?

Agent skills are reusable, self-contained instruction sets that teach an AI agent how to perform specific tasks. Think of them as portable expertise: instead of explaining your workflow to Claude every time, you package the instructions once and the agent loads them automatically when relevant.
Every skill consists of a directory containing at minimum a SKILL.md file. This file has two parts:

1. **YAML frontmatter** between `---` markers that tells the agent what the skill does and when to activate it. The agent scans this frontmatter for every available skill to decide relevance.
2. **Markdown body** with the detailed instructions, loaded only once the skill activates.

Skills can also include supporting files like executable scripts, templates, and configuration files alongside the SKILL.md.
Depending on your platform, skills are stored in different locations:
| Platform | Skill Location |
|---|---|
| Claude Code (personal) | ~/.claude/skills/seogent-seo/ |
| Claude Code (project) | .claude/skills/seogent-seo/ |
| Claude.ai | Upload as ZIP via Settings > Capabilities |
| API | POST to /v1/skills endpoint |
## Setting Up the SEOgent CLI

Before creating the skill, install the SEOgent CLI and configure your API key.
npm install -g seogent
Verify the installation:
seogent --version
Authenticate with your API token. The CLI saves it to ~/.seogent/config.json:
seogent auth <your-api-token>
You can also authenticate per-command or via environment variable:
# Per-command
seogent scan example.com --api-key sk_your_api_key_here
# Environment variable
export SEOGENT_API_KEY=sk_your_api_key_here
Manage your stored credentials:
# Check which key is configured
seogent auth --show
# Remove saved key
seogent auth --remove
Run a quick scan to make sure everything is working:
seogent scan example.com --quiet
This returns a scan_id. Check progress with seogent status <scan_id> and retrieve results with seogent results <scan_id>. If the results include issues and scores, you're ready to build the skill.
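If you'd rather drive this flow from a script than by hand, the three commands chain together naturally. Here's a minimal Python sketch (illustrative, not part of the SEOgent distribution) that assumes the JSON fields shown in this guide (`scan_id`, `status`) and takes an injectable `run` callable so the CLI invocation can be stubbed or replaced:

```python
import json
import subprocess
import time


def run_cli(args):
    """Run a seogent CLI command and parse its JSON stdout."""
    out = subprocess.run(
        ["seogent", *args, "--quiet"], capture_output=True, text=True, check=True
    ).stdout
    return json.loads(out)


def scan_and_wait(target, run=run_cli, interval=15, max_polls=40):
    """Start a scan, poll until it finishes, then return the results JSON."""
    scan_id = run(["scan", target])["scan_id"]
    for _ in range(max_polls):
        time.sleep(interval)
        status = run(["status", scan_id])["status"]
        if status == "completed":
            return run(["results", scan_id])
        if status == "failed":
            raise RuntimeError(f"Scan {scan_id} failed")
        # "pending", "crawling", "analyzing" -> keep polling
    raise TimeoutError(f"Scan {scan_id} did not finish in time")
```

In practice you'd call `scan_and_wait("example.com")`; the `run` parameter exists mainly so the workflow can be exercised without hitting the API.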
## CLI Command Reference

The skill will use these CLI commands. Understanding them will help you customize the skill for your specific workflow.
All commands output structured JSON to stdout, making it easy to pipe into jq, scripts, or AI agents. Human-readable messages (progress, errors) go to stderr and can be silenced with --quiet.
### `seogent scan <url>`

Initiates an SEO scan on the target URL. Returns a scan ID for tracking.
seogent scan example.com
Options:

| Option | Description |
|---|---|
| `--urls <url>` | Additional URLs to scan (repeatable) |
| `--mode <mode>` | Crawl mode: `discover` (default, follows sitemap/links) |
| `--max-pages <number>` | Maximum pages to scan (up to 10,000) |
| `--performance` | Include Core Web Vitals / performance metrics |
| `--link-check` | Check for dead links and broken images |
| `--webhook <url>` | URL to POST results to when scan completes |
| `--quiet` | Suppress non-JSON stderr output |
Examples:
# Scan a domain (discovers pages automatically)
seogent scan example.com
# Scan specific URLs only
seogent scan https://example.com/page --urls https://example.com/other
# Set crawl mode and limit pages
seogent scan example.com --mode discover --max-pages 50
# Include performance metrics (Core Web Vitals)
seogent scan example.com --performance
# Check for dead links and broken images
seogent scan example.com --link-check
# Combine performance and link checking
seogent scan example.com --performance --link-check
### `seogent status <scan_id>`

Check the status and progress of a running scan.
seogent status scan_abc123
### `seogent results <scan_id>`

Retrieve the full results of a completed scan.
seogent results <scan_id>
Options:

| Option | Description |
|---|---|
| `--issues-only` | Only return pages with issues |
| `--min-severity <level>` | Filter: critical, high, medium, or low |
| `--per-page <number>` | Results per page (for pagination) |
| `--cursor <cursor>` | Cursor for next page of results |
Examples:
# Full results
seogent results scan_abc123
# Only pages with issues
seogent results scan_abc123 --issues-only
# Filter by severity
seogent results scan_abc123 --min-severity high
# Paginate through large result sets
seogent results scan_abc123 --per-page 50 --cursor <next_cursor>
### `seogent cancel <scan_id>`

Cancel a running scan.
seogent cancel scan_abc123
# List your scans (paginated)
seogent scans
seogent scans --page 2
# List your scanned domains
seogent domains
# Check credit balance
seogent credits
### Global flags

| Flag | Description |
|---|---|
| `-k, --api-key <key>` | API key (overrides env and config) |
| `-u, --api-url <url>` | API base URL (default: `https://seogent.ai`) |
| `-q, --quiet` | Suppress non-JSON stderr output |
### Exit codes

| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Authentication error (missing or invalid key) |
| 3 | Not found |
| 4 | Rate limited |
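When wrapping the CLI in scripts, it's cleaner to branch on these exit codes than to parse stderr. A minimal illustrative Python sketch (the code-to-meaning table is from above; the `describe_exit` helper itself is hypothetical, not part of the SEOgent CLI):

```python
# Documented SEOgent CLI exit codes (see the table above).
EXIT_CODES = {
    0: "Success",
    1: "General error",
    2: "Authentication error (missing or invalid key)",
    3: "Not found",
    4: "Rate limited",
}


def describe_exit(code: int) -> str:
    """Translate a seogent exit code into a human-readable message."""
    return EXIT_CODES.get(code, f"Unknown exit code {code}")
```

In a wrapper script you'd pass `subprocess.run(...).returncode` to this helper and, for example, retry on code 4 but abort on code 2.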
## Building the Skill

Now let's build the skill step by step. We'll create a directory structure, write the SKILL.md, and add supporting scripts.
seogent-seo/
├── SKILL.md # Core instructions
├── scan.sh # Scanning helper script
└── examples/
├── sample-output.json
└── fix-workflow.md
### Writing the SKILL.md

This is the heart of your skill. Here's the complete file:
---
name: seogent-seo
description: >
Run SEO audits using the SEOgent CLI. Use when the user asks to
check SEO, audit a website, find SEO issues, fix meta tags,
optimize images, check structured data, or improve search rankings.
Handles scanning, polling for results, and applying fixes.
---
# SEOgent SEO Scanner Skill
## Overview
This skill uses the SEOgent CLI to perform comprehensive SEO audits
on websites and guide the agent through fixing discovered issues.
## Reference Files
This skill directory contains additional resources — consult these
before your first scan:
- **`scan.sh`** — Ready-to-run shell script that handles the full
scan → poll → results workflow automatically. Use as a reference
for correct flag usage, status checking, and polling patterns.
- **`examples/sample-output.json`** — Realistic example of the full
JSON response structure. Reference this to understand the shape of
results before parsing real output.
- **`examples/fix-workflow.md`** — Step-by-step walkthrough of fixing
site-wide vs page-specific issues with concrete CLI examples.
## Prerequisites
- SEOgent CLI installed: `npm install -g seogent`
- API key set via `seogent auth <token>` or `export SEOGENT_API_KEY=sk_...`
## Scanning Workflow (3 steps)
Every scan follows this pattern. Run each step as a **separate command**.
### Step 1: Start the scan
```bash
seogent scan <domain> --quiet
```

This returns JSON immediately with a `scan_id`. Extract it:
{ "scan_id": "xxx", "status": "pending", ... }
Variations:
# Scan specific URLs
seogent scan <url1> --urls <url2> --urls <url3> --quiet
# Include performance metrics (Core Web Vitals)
seogent scan <domain> --performance --quiet
# Check for dead links and broken images
seogent scan <domain> --link-check --quiet
# Limit discovered pages
seogent scan <domain> --max-pages 50 --quiet
### Step 2: Poll until complete

The most efficient pattern is to combine a wait with the status check:
sleep 15 && seogent status <scan_id> --quiet
This waits 15 seconds then polls in a single command.
Check the status field in the response:
"pending", "crawling", or "analyzing" → run the combined sleep+status command again"completed" → move to step 3"failed" → report error to userDo NOT use a bash while-loop — run each poll as its own command so you can read the output.
### Step 3: Fetch the results

seogent results <scan_id> --quiet
# Only pages with issues
seogent results <scan_id> --issues-only --quiet
# Only high-severity and above
seogent results <scan_id> --min-severity high --quiet
# Paginate large result set
seogent results <scan_id> --per-page 50 --cursor <next_cursor> --quiet
### Handling large results

Results for sites with 30+ pages are typically 200KB–600KB+ and will exceed context limits if read directly.

When results are large:

1. Save the output to a file instead of reading it inline (e.g. `seogent results <scan_id> --quiet > results.json`).
2. Delegate analysis to a subagent with `subagent_type: "general-purpose"`. Give it the file path and a detailed prompt asking it to:
   - extract `average_score` and the `summary` breakdown (excellent/good/needs_work/poor)
   - review `site_checks` for site-wide config issues

### Results structure

All output is JSON on stdout. The full structure is:
{
average_score, // Overall SEO score (0-100)
summary {
excellent, // Count of pages scoring 90+
good, // Count of pages scoring 70-89
needs_work, // Count of pages scoring 50-69
poor // Count of pages scoring 0-49
},
site_checks {
checks[{
key, // e.g. "robots_txt", "sitemap"
name, // Human-readable name
status, // "passed" | "failed" | "warning"
message, // Description of the finding
category // e.g. "crawlability", "indexability"
}],
duplicate_titles {
count, found,
duplicates[] // Groups of pages sharing the same title
},
duplicate_descriptions {
count, found,
duplicates[] // Groups of pages sharing the same description
}
},
top_issues[{
issue, // Human-readable issue description
count // Number of pages affected
}],
results {
data[{
url, // Page URL
score, // Page score (0-100)
grade, // Letter grade (A, B, C, D, F)
failed_checks[], // Strings — things that failed
warnings[], // Strings — things that need attention
all_checks[{
name, key, status, // Same fields as site_checks
message, category, weight // Plus weight for scoring impact
}]
}],
next_cursor, // For pagination
prev_cursor,
per_page
}
}
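To make that structure concrete, here's an illustrative Python sketch that condenses a saved results payload into the handful of facts an agent needs first. Field names follow the schema above; `summarize` is a hypothetical helper, not a SEOgent API:

```python
def summarize(results: dict) -> dict:
    """Condense a SEOgent results payload into a compact report."""
    failed_site_checks = [
        c["name"] for c in results["site_checks"]["checks"] if c["status"] == "failed"
    ]
    # Lowest-scoring pages first: these are where fixes matter most.
    worst_pages = sorted(results["results"]["data"], key=lambda p: p["score"])[:5]
    return {
        "average_score": results["average_score"],
        "summary": results["summary"],
        "failed_site_checks": failed_site_checks,
        "top_issues": results["top_issues"],
        "worst_pages": [(p["url"], p["score"], p["grade"]) for p in worst_pages],
    }
```

Feed it the parsed contents of `results.json` and report the returned dict instead of pasting raw JSON into context.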
## Interpreting Results

Before jumping to fixes, follow this process to interpret results:

1. **Check `site_checks` first** — these are site-wide config issues like missing robots.txt, missing sitemap, duplicate titles/descriptions. These are the highest-leverage fixes.
2. **Identify baseline pages** — find pages you expect to be "clean" (e.g., the homepage or a well-maintained product page). If those pages still have warnings, those issues are likely unintentional template-level bugs.
3. **Categorize by scope:** separate issues that appear on every page (usually template-level) from issues confined to specific pages.
4. **Check for intentional issues** — if the project documents deliberate SEO gaps (e.g., a test/demo site with known issues), cross-reference before fixing. Don't "fix" things that are intentionally configured that way.
5. **Prioritize by impact** — fix site-wide issues first (highest blast radius), then page-specific issues sorted by page importance/traffic.
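The scope categorization above can be estimated mechanically: a check that fails on nearly every page is probably a template-level bug, while one that fails on a few pages is page-specific. An illustrative sketch (the threshold and function name are arbitrary), using the `results.data[].failed_checks` arrays:

```python
from collections import Counter


def categorize_by_scope(pages, site_wide_threshold=0.8):
    """Split failed checks into likely template-level vs page-specific issues."""
    counts = Counter(check for page in pages for check in page["failed_checks"])
    total = len(pages)
    site_wide = {c: n for c, n in counts.items() if n / total >= site_wide_threshold}
    page_specific = {c: n for c, n in counts.items() if n / total < site_wide_threshold}
    return site_wide, page_specific
```

Anything in the `site_wide` bucket is a candidate for a single fix in the base template.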
## Applying Fixes

The SEOgent CLI identifies issues and provides fix recommendations. The agent's job is to read these recommendations (consulting each check's `fix` field for guidance where present) and apply the fixes directly in the codebase.

### High-impact fixes

These fixes consistently produce the largest score improvements, listed in rough priority order:
1. **Canonical tags** — add self-referencing `<link rel="canonical">` tags to the base template. Generate the URL from the request URI so each page gets its own canonical automatically.
2. **Open Graph tags** — add `og:title`, `og:description`, `og:image` to the base template `<head>`. Use the page's existing title/description meta values so they stay in sync.
3. **JSON-LD structured data** — add schema.org markup:
   - `Product` schema for product pages
   - `Article` schema for blog posts
   - `Organization` schema for the company/about page
4. **Heading hierarchy** — ensure headings follow H1 → H2 → H3 with no skipped levels. Common mistake: using `<h4>` or `<h5>` in footers or navigation. Use `<p>`, `<span>`, or styled `<div>` elements instead.
5. **Image lazy loading** — add `loading="lazy"` to below-fold images. Do NOT add it to hero images or anything above the fold — lazy loading above-fold content hurts perceived performance.
6. **robots.txt + sitemap.xml** — ensure both exist and are accessible.
7. **Title length** — titles under 30 characters are flagged. Pad short titles with the brand name or descriptors (e.g., "Products" → "Products | Acme — Quality Widgets").
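As a sketch of the title-padding rule (the 30-60 character range comes from the checks above; the helper itself is hypothetical, and it uses a plain hyphen as the descriptor separator):

```python
def pad_title(title: str, brand: str, descriptor: str = "",
              min_length: int = 30, max_length: int = 60) -> str:
    """Extend a too-short title with the brand name and optional descriptor."""
    if len(title) >= min_length:
        return title  # already long enough; leave it alone
    padded = f"{title} | {brand}"
    if descriptor and len(padded) < min_length:
        padded = f"{padded} - {descriptor}"
    return padded[:max_length]  # never overshoot the recommended maximum
```

A template filter like this keeps short CMS-entered titles within the recommended range without touching titles that are already fine.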
After applying fixes, re-scan only the specific pages that had issues to verify the fixes were applied correctly. This is faster and more efficient than re-scanning the entire site.
1. Run the initial full-site scan (3-step async flow above).
2. Extract the URLs that had issues from the results JSON.
3. Apply fixes to the relevant source files in the codebase.
4. Re-scan only the affected URLs:

   seogent scan https://example.com/page-with-issues \
     --urls https://example.com/another-page \
     --urls https://example.com/third-page \
     --quiet

5. Poll status, then fetch results (steps 2-3 of scanning workflow).
6. If issues remain, repeat steps 3-5 until clean.
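The targeted re-scan command can be generated from the previous results instead of typed by hand. An illustrative Python sketch (the `min_score` cutoff and function name are assumptions) using the `results.data[]` fields:

```python
def rescan_args(pages, min_score=90):
    """Build the argv for a targeted re-scan of pages that still have issues."""
    urls = [p["url"] for p in pages if p["failed_checks"] or p["score"] < min_score]
    if not urls:
        return None  # nothing left to re-scan
    args = ["seogent", "scan", urls[0]]
    for url in urls[1:]:
        args += ["--urls", url]
    return args + ["--quiet"]
```

Pass the returned list to `subprocess.run` (or join it for a shell command) to kick off the verification scan.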
## Common Workflows

Basic full-site audit:

1. `seogent scan <domain> --quiet`
2. `sleep 15 && seogent status <scan_id> --quiet` (repeat until completed)
3. `seogent results <scan_id> --quiet`

Include Core Web Vitals in the scan:
seogent scan <domain> --performance --quiet
Check for broken links and missing images across the site:
seogent scan <domain> --link-check --quiet
When presenting scan results:
Summary table — show overall score, grade breakdown (excellent/good/needs_work/poor counts), and before/after comparison if this is a re-scan.
Site-level issues first — report findings from site_checks
(robots.txt, sitemap, duplicate titles/descriptions). These affect
the whole site and are highest priority.
Group page issues by scope: distinguish template-level issues (one fix improves many pages) from issues confined to individual pages.
Note blast radius — for each fix, state how many pages it will improve (e.g., "Adding canonical tags to the base template fixes this on all 32 pages").
Always present the plan and ask before making changes — don't start editing files without user approval.
After fixes, re-scan and compare — show a before/after score comparison to demonstrate improvement.
Things learned the hard way:
Results can include 404 pages — the scanner discovers pages via internal links. If broken links exist, those 404 pages appear in results with low scores. These inflate "needs work" counts but aren't fixable SEO pages — fix the broken links instead.
Large results exceed context — sites with 30+ pages produce 200KB–600KB of JSON. Always delegate large result analysis to a subagent (see "Handling large results" above).
---
## Adding the Scanning Helper Script
For more complex workflows, add a helper script that wraps the polling logic. Create `scan.sh` in your skill directory:
```bash
#!/bin/bash
# scan.sh - SEOgent scan with automatic polling
# Usage: ./scan.sh <url> [max-pages]
URL=$1
MAX_PAGES=${2:-100}
if [ -z "$URL" ]; then
echo "Usage: ./scan.sh <url> [max-pages]"
exit 1
fi
if [ -z "$SEOGENT_API_KEY" ]; then
# Check if auth is configured in config file
if ! seogent auth --show > /dev/null 2>&1; then
echo "Error: No API key configured. Run 'seogent auth <token>' or set SEOGENT_API_KEY"
exit 2
fi
fi
# Start scan (--quiet for JSON output)
echo "Starting scan of $URL (max $MAX_PAGES pages)..." >&2
RESULT=$(seogent scan "$URL" --max-pages "$MAX_PAGES" --quiet)
SCAN_ID=$(echo "$RESULT" | jq -r '.scan_id')
if [ -z "$SCAN_ID" ] || [ "$SCAN_ID" = "null" ]; then
echo "Failed to start scan" >&2
echo "$RESULT" >&2
exit 1
fi
echo "Scan ID: $SCAN_ID" >&2
# Poll for completion
while true; do
  sleep 15
  STATUS_JSON=$(seogent status "$SCAN_ID" --quiet)
  STATUS=$(echo "$STATUS_JSON" | jq -r '.status')
  echo "Status: $STATUS" >&2

  if [ "$STATUS" = "completed" ]; then
    seogent results "$SCAN_ID" --quiet
    exit 0
  fi

  if [ "$STATUS" = "failed" ]; then
    echo "Scan failed" >&2
    echo "$STATUS_JSON" >&2
    exit 1
  fi

  # "pending", "crawling", or "analyzing" — loop continues
done
```
## Adding Example Files

The `examples/` directory gives your agent reference material it can consult without running a real scan. These files improve first-scan accuracy and help the agent understand the response structure upfront.
### `examples/sample-output.json`

A realistic example of the full JSON response structure. The agent can reference this to understand the shape of results before parsing real output:
{
"scan_id": "scan_abc123def456",
"status": "completed",
"url": "https://example.com",
"pages_scanned": 32,
"average_score": 74,
"summary": {
"excellent": 2,
"good": 12,
"needs_work": 15,
"poor": 3
},
"site_checks": {
"checks": [
{
"key": "robots_txt",
"name": "robots.txt",
"status": "passed",
"message": "robots.txt is accessible and valid",
"category": "crawlability"
},
{
"key": "sitemap",
"name": "XML Sitemap",
"status": "failed",
"message": "No sitemap.xml found at /sitemap.xml",
"category": "crawlability"
}
],
"duplicate_titles": {
"count": 2,
"found": true,
"duplicates": [
{
"title": "Example Site",
"pages": [
"https://example.com/about",
"https://example.com/contact"
]
}
]
},
"duplicate_descriptions": {
"count": 0,
"found": false,
"duplicates": []
}
},
"top_issues": [
{ "issue": "Missing canonical tag", "count": 28 },
{ "issue": "Missing Open Graph tags", "count": 30 },
{ "issue": "No structured data found", "count": 25 },
{ "issue": "Heading hierarchy skipped (H1 to H4)", "count": 18 },
{ "issue": "Images missing lazy loading", "count": 12 },
{ "issue": "Title too short (under 30 characters)", "count": 8 }
],
"results": {
"data": [
{
"url": "https://example.com/",
"score": 82,
"grade": "B",
"failed_checks": [
"No structured data found",
"Missing Open Graph tags"
],
"warnings": [
"Title could be longer (28 characters, recommend 30-60)"
],
"all_checks": [
{
"name": "Meta Title",
"key": "meta_title",
"status": "warning",
"message": "Title is 28 characters. Recommended: 30-60.",
"category": "meta",
"weight": 5
},
{
"name": "Canonical Tag",
"key": "canonical",
"status": "failed",
"message": "No canonical tag found. Add a self-referencing canonical.",
"category": "indexability",
"weight": 8
}
]
}
],
"next_cursor": "eyJwYWdlIjoyLCJsaW1pdCI6NTB9",
"prev_cursor": null,
"per_page": 50
}
}
### `examples/fix-workflow.md`

A step-by-step walkthrough of fixing site-wide vs page-specific issues, with concrete CLI examples for identifying scope, applying fixes, and verifying with targeted re-scans. See the pre-built skill download for the full file, or create your own based on the workflows described in the SKILL.md.
## Installing the Skill

### Claude Code

For personal use across all projects, copy the skill directory to your personal skills folder:
mkdir -p ~/.claude/skills/
cp -r seogent-seo/ ~/.claude/skills/seogent-seo/
For a specific project, place it in your project's .claude directory:
mkdir -p .claude/skills/
cp -r seogent-seo/ .claude/skills/seogent-seo/
Claude Code detects skill changes live, so you can edit the SKILL.md during a session without restarting.
### Claude.ai

For use in the Claude.ai web or mobile app, zip the skill directory and upload it via **Settings > Capabilities**.
Note: Skills in Claude.ai require Code Execution to be enabled. Make sure this is toggled on in your Capabilities settings.
### Anthropic API

If you're building a custom agent using the Anthropic API with code execution, you can provision skills programmatically. Refer to the Skills API Quickstart in Anthropic's documentation for implementation details.
## Using the Skill

Once installed, the skill activates automatically when you mention SEO-related tasks. Here are some examples of how it works in practice.
User: "Check the SEO on https://mysite.com"
Claude: I'll run a full SEO scan on mysite.com now.
[Executes: seogent scan mysite.com --quiet]
[Polls: sleep 15 && seogent status <scan_id> --quiet until completed]
[Fetches: seogent results <scan_id> --quiet]
Scan complete! Here's what I found:
SEO Score: 72/100
Critical Issues (4):
1. Missing meta descriptions on 8 pages
2. 3 images over 1MB without optimization
3. No structured data on product pages
4. Broken canonical tags on /blog/*
Warnings (11):
- Duplicate title tags on 5 pages
- Missing alt text on 23 images
- Slow LCP (3.8s) on homepage
...
Want me to fix any of these? I can start with the critical issues.
User: "Fix the missing meta descriptions"
Claude: I'll fix the meta descriptions. Let me locate the relevant
templates in your codebase.
[Agent reads the scan results to identify the 8 affected URLs,
finds the corresponding source files, and adds meta descriptions]
Done! I've added meta descriptions to all 8 pages. Now let me
verify the fixes by re-scanning just those URLs...
[Executes: seogent scan https://mysite.com/about \
--urls https://mysite.com/pricing \
--urls https://mysite.com/blog/getting-started \
--urls https://mysite.com/blog/seo-tips \
--urls https://mysite.com/features \
--urls https://mysite.com/contact \
--urls https://mysite.com/docs \
--urls https://mysite.com/changelog \
--quiet]
[Polls: sleep 15 && seogent status <scan_id> --quiet until completed]
[Fetches: seogent results <scan_id> --quiet]
Verification scan complete! All 8 pages now have meta descriptions.
No remaining issues on these URLs. Score for these pages: 94/100.
Want me to tackle the image optimization next?
User: "Re-scan the whole site to see the overall improvement"
Claude: Running a full site scan to compare...
[Executes: seogent scan mysite.com --quiet]
[Polls: sleep 15 && seogent status <scan_id> --quiet until completed]
[Fetches: seogent results <scan_id> --quiet]
Score improved from 72 to 84!
Resolved:
- Meta descriptions now present on all pages
- 3 of 4 critical issues fixed
Remaining:
- Images still need optimization (3 files > 1MB)
- Structured data still missing on product pages
Want me to tackle the images next?
User: "Add an SEO check to our GitHub Actions pipeline"
Claude: I'll add a step that fails the build if critical SEO issues
are found.
[Agent creates/edits .github/workflows/seo-check.yml]
- name: SEO audit
env:
SEOGENT_API_KEY: ${{ secrets.SEOGENT_API_KEY }}
run: |
SCAN_ID=$(npx seogent scan ${{ env.SITE_URL }} --quiet | jq -r '.scan_id')
while true; do
sleep 15
STATUS=$(npx seogent status $SCAN_ID --quiet | jq -r '.status')
[ "$STATUS" = "completed" ] && break
[ "$STATUS" = "failed" ] && echo "Scan failed" && exit 1
done
npx seogent results $SCAN_ID --quiet > results.json
HIGH=$(npx seogent results "$SCAN_ID" --min-severity high --issues-only --quiet | jq '.results.data | length')
if [ "$HIGH" -gt 0 ]; then echo "$HIGH pages with critical/high issues found" && exit 1; fi
## Customizing the Skill

The base skill covers general SEO auditing, but you can tailor it to your specific needs.
If Core Web Vitals are important to you, add instructions to always include performance:
## Default Behavior
When scanning, always include --performance unless the user specifies
otherwise. Flag any LCP > 2.5s, CLS > 0.1, or INP > 200ms as
high-priority issues.
Add context about your tech stack so the agent generates appropriate fixes:
## Project Context
This project uses Craft CMS with Twig templates.
- Meta tags are set in templates/_layouts/base.twig
- SEO fields use the SEOmatic plugin
- Images are served via Imager-X transforms
When suggesting fixes:
- Use SEOmatic field syntax, not raw HTML meta tags
- Reference Imager-X for image optimization suggestions
- Twig template paths follow templates/{section}/{entry}.twig
Skills compose naturally. If you have a deployment skill, the agent can scan, fix, and deploy in one workflow:
User: "Audit the SEO, fix critical issues, and deploy"
Claude uses seogent-seo skill for scanning and fixes,
then uses deploy skill to push changes to production.
## Using with Other Agents

While this guide focuses on Claude, the SEOgent CLI works with any agent that can execute shell commands. Here's how to adapt the skill for other platforms.
### OpenAI Custom GPTs

Create a custom action that wraps the CLI commands. Define the scan, status, and results endpoints in your OpenAPI spec, then add instructions in the GPT's system prompt that mirror the workflows section of the SKILL.md.
### IDE agents (Cursor, Windsurf, etc.)

Most IDE-based agents support custom rules or instructions files. Place the SKILL.md content in your project's agent configuration file (e.g., `.cursorrules`, `.windsurfrules`) or reference it from your project's context documentation.
### MCP-compatible agents

For agents that support the Model Context Protocol (MCP), you can wrap the SEOgent CLI as an MCP server. This exposes the scan, status, results, and cancel commands as structured tools that any MCP-compatible agent can discover and call.
Agent Skills Specification: The SKILL.md format follows the open Agent Skills specification at agentskills.io, designed for portability across platforms. Skills you create for Claude can be adapted for other agents with minimal changes.
## Troubleshooting

CLI not found: Make sure seogent is installed globally and your PATH includes the npm global bin directory. Run `npm prefix -g` to find the global install prefix (binaries typically live in its `bin` subdirectory; note that `npm bin` was removed in npm 9).
Authentication errors: Verify your API key is set correctly. Run seogent auth --show to check the stored key, or echo $SEOGENT_API_KEY to verify the environment variable. Exit code 2 indicates an authentication problem.
Scan timeouts: Large sites can take time. Use --max-pages to limit the crawl scope for initial testing. Use the async workflow (scan → status → results) and poll until complete.
Rate limits: SEOgent enforces rate limits based on your plan. Exit code 4 indicates you've been rate limited. Check your remaining credits with seogent credits.
Skill not activating: Check that your SKILL.md frontmatter includes a descriptive description field with relevant trigger words. The agent matches your request against this description to decide whether to load the skill.
Don't want to build it from scratch? Download the SEOgent SEO Skill and drop it into ~/.claude/skills/ (personal) or .claude/skills/ (project). It includes the SKILL.md and scanning helper script — ready to use out of the box.