
SEOgent Agent Skill Guide for AI Agents

How to Create an AI Agent Skill for Automated SEO Auditing & Fixes

Compatible with Claude Code, Claude.ai, and any agent supporting the Agent Skills specification


Introduction

SEOgent provides a powerful CLI and API that AI agents can use to perform comprehensive SEO audits on websites. By creating a custom skill, you give your AI coding assistant the ability to scan any site for SEO issues, review the results, and apply fixes directly in your codebase — all from within your normal development workflow.

This guide walks you through creating a skill that integrates the SEOgent CLI with your AI agent. While the examples focus on Claude (via Claude Code and Claude.ai), the same approach works with any agent that supports custom skills or tool definitions.

Want to skip the setup? Download the SEOgent SEO Skill — a ready-made skill package you can drop into your project and start scanning immediately.

What You'll Build

By the end of this guide, you'll have a skill that allows your agent to:

- Run full-site SEO scans with the SEOgent CLI
- Poll scan status and retrieve structured JSON results
- Analyze findings and separate site-wide template issues from page-specific ones
- Apply fixes directly in your codebase
- Re-scan only the affected URLs to verify fixes

Prerequisites

- Node.js and npm (for installing the SEOgent CLI)
- A SEOgent API key
- An AI agent that supports custom skills, such as Claude Code or Claude.ai


Understanding Agent Skills

Agent skills are reusable, self-contained instruction sets that teach an AI agent how to perform specific tasks. Think of them as portable expertise: instead of explaining your workflow to Claude every time, you package the instructions once and the agent loads them automatically when relevant.

How Skills Work

Every skill consists of a directory containing at minimum a SKILL.md file. This file has two parts:

  1. YAML frontmatter — metadata between --- markers that tells the agent what the skill does and when to activate it. The agent scans this frontmatter for every available skill to decide relevance.
  2. Markdown content — the actual instructions the agent follows when the skill is invoked. This can include step-by-step procedures, code examples, API reference, and expected output formats.

Skills can also include supporting files like executable scripts, templates, and configuration files alongside the SKILL.md.
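For orientation, a minimal skill file might look like this (the name and wording here are illustrative, not part of SEOgent):

```markdown
---
name: my-skill
description: >
  What the skill does and the phrases that should trigger it. The agent
  matches user requests against this text to decide relevance.
---

# My Skill

Step-by-step instructions the agent follows once the skill is loaded.
```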

Where Skills Live

Depending on your platform, skills are stored in different locations:

| Platform | Skill Location |
|---|---|
| Claude Code (personal) | `~/.claude/skills/seogent-seo/` |
| Claude Code (project) | `.claude/skills/seogent-seo/` |
| Claude.ai | Upload as ZIP via Settings > Capabilities |
| API | POST to `/v1/skills` endpoint |

Installing the SEOgent CLI

Before creating the skill, install the SEOgent CLI and configure your API key.

Installation

npm install -g seogent

Verify the installation:

seogent --version

Authentication

Authenticate with your API token. The CLI saves it to ~/.seogent/config.json:

seogent auth <your-api-token>

You can also authenticate per-command or via environment variable:

# Per-command
seogent scan example.com --api-key sk_your_api_key_here

# Environment variable
export SEOGENT_API_KEY=sk_your_api_key_here

Manage your stored credentials:

# Check which key is configured
seogent auth --show

# Remove saved key
seogent auth --remove

Quick Test

Run a quick scan to make sure everything is working:

seogent scan example.com --quiet

This returns a scan_id. Check progress with seogent status <scan_id> and retrieve results with seogent results <scan_id>. If the results include issues and scores, you're ready to build the skill.
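The `scan_id` can be pulled out of the JSON with `jq`. A quick illustration on a hard-coded sample response (in a real workflow, `RESULT` comes from the scan command):

```shell
# Sample of the JSON shape `seogent scan` prints (hard-coded for illustration)
RESULT='{"scan_id":"scan_abc123","status":"pending"}'

# Extract the scan_id for use with `seogent status` / `seogent results`
SCAN_ID=$(printf '%s' "$RESULT" | jq -r '.scan_id')
echo "$SCAN_ID"   # scan_abc123
```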


SEOgent CLI Command Reference

The skill will use these CLI commands. Understanding them will help you customize the skill for your specific workflow.

Output Format

All commands output structured JSON to stdout, making it easy to pipe into jq, scripts, or AI agents. Human-readable messages (progress, errors) go to stderr and can be silenced with --quiet.

Core Commands

seogent scan <url>

Initiates an SEO scan on the target URL. Returns a scan ID for tracking.

seogent scan example.com
  --urls <url>           Additional URLs to scan (repeatable)
  --mode <mode>          Crawl mode: discover (default, follows sitemap/links)
  --max-pages <number>   Maximum pages to scan (up to 10,000)
  --performance          Include Core Web Vitals / performance metrics
  --link-check           Check for dead links and broken images
  --webhook <url>        URL to POST results to when scan completes
  --quiet                Suppress non-JSON stderr output

Examples:

# Scan a domain (discovers pages automatically)
seogent scan example.com

# Scan specific URLs only
seogent scan https://example.com/page --urls https://example.com/other

# Set crawl mode and limit pages
seogent scan example.com --mode discover --max-pages 50

# Include performance metrics (Core Web Vitals)
seogent scan example.com --performance

# Check for dead links and broken images
seogent scan example.com --link-check

# Combine performance and link checking
seogent scan example.com --performance --link-check

seogent status <scan_id>

Check the status and progress of a running scan.

seogent status scan_abc123

seogent results <scan_id>

Retrieve the full results of a completed scan.

seogent results <scan_id>
  --issues-only          Only return pages with issues
  --min-severity <level> Filter: critical | high | medium | low
  --per-page <number>    Results per page (for pagination)
  --cursor <cursor>      Cursor for next page of results

Examples:

# Full results
seogent results scan_abc123

# Only pages with issues
seogent results scan_abc123 --issues-only

# Filter by severity
seogent results scan_abc123 --min-severity high

# Paginate through large result sets
seogent results scan_abc123 --per-page 50 --cursor <next_cursor>
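The pagination pattern is to keep requesting pages until `next_cursor` comes back null. The loop below shows the shape; `fetch_page` is a stub standing in for `seogent results <scan_id> --per-page 50 --cursor <cursor> --quiet` so the sketch is runnable on its own:

```shell
# Stub in place of the real CLI call, returning two pages of results
fetch_page() {  # $1 = cursor ("" requests the first page)
  case "$1" in
    "") echo '{"data":[{"url":"https://example.com/"}],"next_cursor":"c2"}' ;;
    c2) echo '{"data":[{"url":"https://example.com/about"}],"next_cursor":null}' ;;
  esac
}

CURSOR=""
PAGES=0
while :; do
  PAGE=$(fetch_page "$CURSOR")
  echo "$PAGE" | jq -r '.data[].url'             # process this page of results
  PAGES=$((PAGES + 1))
  CURSOR=$(echo "$PAGE" | jq -r '.next_cursor')  # "null" when no more pages
  [ "$CURSOR" = "null" ] && break
done
```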

seogent cancel <scan_id>

Cancel a running scan.

seogent cancel scan_abc123

Account Commands

# List your scans (paginated)
seogent scans
seogent scans --page 2

# List your scanned domains
seogent domains

# Check credit balance
seogent credits

Global Options

| Flag | Description |
|---|---|
| `-k, --api-key <key>` | API key (overrides env and config) |
| `-u, --api-url <url>` | API base URL (default: `https://seogent.ai`) |
| `-q, --quiet` | Suppress non-JSON stderr output |

Exit Codes

| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Authentication error (missing or invalid key) |
| 3 | Not found |
| 4 | Rate limited |
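A wrapper script can branch on these codes. A runnable sketch (`explain_exit` is a hypothetical helper, exercised here with a hard-coded code rather than a live CLI call):

```shell
# Map SEOgent exit codes (table above) to actionable messages
explain_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "general error -- inspect stderr output" ;;
    2) echo "auth error -- run 'seogent auth <token>' or set SEOGENT_API_KEY" ;;
    3) echo "not found -- check the scan_id" ;;
    4) echo "rate limited -- check remaining credits with 'seogent credits'" ;;
    *) echo "unknown exit code: $1" ;;
  esac
}

# In a real script: seogent status "$SCAN_ID" --quiet; explain_exit $?
MSG=$(explain_exit 4)
echo "$MSG"
```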

Creating the SEOgent Skill

Now let's build the skill step by step. We'll create a directory structure, write the SKILL.md, and add supporting scripts.

Directory Structure

seogent-seo/
├── SKILL.md           # Core instructions
├── scan.sh            # Scanning helper script
└── examples/
    ├── sample-output.json
    └── fix-workflow.md

The SKILL.md File

This is the heart of your skill. Here's the complete file:

---
name: seogent-seo
description: >
  Run SEO audits using the SEOgent CLI. Use when the user asks to
  check SEO, audit a website, find SEO issues, fix meta tags,
  optimize images, check structured data, or improve search rankings.
  Handles scanning, polling for results, and applying fixes.
---

# SEOgent SEO Scanner Skill

## Overview
This skill uses the SEOgent CLI to perform comprehensive SEO audits
on websites and guide the agent through fixing discovered issues.

## Reference Files
This skill directory contains additional resources — consult these
before your first scan:
- **`scan.sh`** — Ready-to-run shell script that handles the full
  scan → poll → results workflow automatically. Use as a reference
  for correct flag usage, status checking, and polling patterns.
- **`examples/sample-output.json`** — Realistic example of the full
  JSON response structure. Reference this to understand the shape of
  results before parsing real output.
- **`examples/fix-workflow.md`** — Step-by-step walkthrough of fixing
  site-wide vs page-specific issues with concrete CLI examples.

## Prerequisites
- SEOgent CLI installed: `npm install -g seogent`
- API key set via `seogent auth <token>` or `export SEOGENT_API_KEY=sk_...`

## Scanning Workflow (3 steps)

Every scan follows this pattern. Run each step as a **separate command**.

### Step 1: Start the scan
```bash
seogent scan <domain> --quiet
```

This returns JSON immediately with a scan_id. Extract it:

{ "scan_id": "xxx", "status": "pending", ... }

Variations:

# Scan specific URLs
seogent scan <url1> --urls <url2> --urls <url3> --quiet

# Include performance metrics (Core Web Vitals)
seogent scan <domain> --performance --quiet

# Check for dead links and broken images
seogent scan <domain> --link-check --quiet

# Limit discovered pages
seogent scan <domain> --max-pages 50 --quiet

Step 2: Poll for completion

The most efficient pattern is to combine a wait with the status check:

sleep 15 && seogent status <scan_id> --quiet

This waits 15 seconds then polls in a single command.

Check the status field in the response:

- `pending`, `crawling`, or `analyzing`: still running; wait and poll again
- `completed`: fetch results (Step 3)
- `failed`: stop and report the error to the user

Do NOT use a bash while-loop — run each poll as its own command so you can read the output.

Step 3: Fetch results

seogent results <scan_id> --quiet

Filtering results

# Only pages with issues
seogent results <scan_id> --issues-only --quiet

# Only high-severity and above
seogent results <scan_id> --min-severity high --quiet

# Paginate large result set
seogent results <scan_id> --per-page 50 --cursor <next_cursor> --quiet

Handling large results

Results for sites with 30+ pages are typically 200KB–600KB+ and will exceed context limits if read directly.

When results are large:

  1. The CLI output will be auto-saved to a persisted file. Note the file path.
  2. Delegate analysis to a general-purpose subagent using the Task tool with subagent_type: "general-purpose". Give it the file path and a detailed prompt asking it to:
    • Summarize the average_score and summary breakdown (excellent/good/needs_work/poor)
    • Review site_checks for site-wide config issues
    • Identify site-wide issues — the same warning appearing on 80%+ of pages indicates a template or layout bug (fix once, improves everywhere)
    • Identify page-specific issues — warnings on only 1-2 pages
    • Return a prioritized fix list grouped by blast radius (site-wide first)
  3. Use the subagent's analysis to present findings and apply fixes.
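When the saved file is too large to read directly, targeted `jq` queries keep context usage small. A runnable example (the sample file is created inline here; in practice it is the results file saved by the CLI):

```shell
# Sample results file matching this skill's schema, created inline
cat > results.json <<'EOF'
{ "average_score": 74,
  "summary": { "excellent": 2, "good": 12, "needs_work": 15, "poor": 3 },
  "top_issues": [ { "issue": "Missing canonical tag", "count": 28 } ] }
EOF

# Pull only the summary fields -- a few hundred bytes instead of the whole file
SCORE=$(jq '.average_score' results.json)
jq -c '.summary' results.json
TOP=$(jq -r '.top_issues[0] | "\(.count)x \(.issue)"' results.json)
echo "Score: $SCORE | Top issue: $TOP"
```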

Reading Results

All output is JSON on stdout. The full structure is:

{
  average_score,                    // Overall SEO score (0-100)
  summary {
    excellent,                      // Count of pages scoring 90+
    good,                           // Count of pages scoring 70-89
    needs_work,                     // Count of pages scoring 50-69
    poor                            // Count of pages scoring 0-49
  },
  site_checks {
    checks[{
      key,                          // e.g. "robots_txt", "sitemap"
      name,                         // Human-readable name
      status,                       // "passed" | "failed" | "warning"
      message,                      // Description of the finding
      category                      // e.g. "crawlability", "indexability"
    }],
    duplicate_titles {
      count, found,
      duplicates[]                  // Groups of pages sharing the same title
    },
    duplicate_descriptions {
      count, found,
      duplicates[]                  // Groups of pages sharing the same description
    }
  },
  top_issues[{
    issue,                          // Human-readable issue description
    count                           // Number of pages affected
  }],
  results {
    data[{
      url,                          // Page URL
      score,                        // Page score (0-100)
      grade,                        // Letter grade (A, B, C, D, F)
      failed_checks[],              // Strings — things that failed
      warnings[],                   // Strings — things that need attention
      all_checks[{
        name, key, status,          // Same fields as site_checks
        message, category, weight   // Plus weight for scoring impact
      }]
    }],
    next_cursor,                    // For pagination
    prev_cursor,
    per_page
  }
}
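A query that comes up often against this structure: list the worst-scoring pages first. Shown here against a small inline sample matching the schema (real data comes from `seogent results`):

```shell
# Inline sample matching the results schema above
cat > results.json <<'EOF'
{ "average_score": 74,
  "results": { "data": [
    { "url": "https://example.com/",      "score": 82, "warnings": ["Title too short"] },
    { "url": "https://example.com/about", "score": 48, "warnings": ["Title too short"] }
  ] } }
EOF

# Pages scoring below 70, worst first -- candidates for immediate fixes
WORST=$(jq -r '.results.data | map(select(.score < 70)) | sort_by(.score) | .[].url' results.json)
echo "$WORST"
```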

Analysis Strategy

Before jumping to fixes, follow this process to interpret results:

  1. Check site_checks first — these are site-wide config issues like missing robots.txt, missing sitemap, duplicate titles/descriptions. These are the highest-leverage fixes.

  2. Identify baseline pages — find pages you expect to be "clean" (e.g., the homepage or a well-maintained product page). If those pages still have warnings, those issues are likely unintentional template-level bugs.

  3. Categorize by scope:

    • Site-wide template issues: The same warning on most pages means it's in the base layout or a shared partial. Fix once → improves every page.
    • Page-specific issues: Warning on only 1-2 pages means it's in that page's content or template. Fix individually.
  4. Check for intentional issues — if the project documents deliberate SEO gaps (e.g., a test/demo site with known issues), cross-reference before fixing. Don't "fix" things that are intentionally configured that way.

  5. Prioritize by impact — fix site-wide issues first (highest blast radius), then page-specific issues sorted by page importance/traffic.
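The site-wide vs page-specific split in step 3 can be computed mechanically: count how many pages each warning appears on and compare against the total page count. A runnable sketch (inline sample; real data comes from the saved results file):

```shell
cat > results.json <<'EOF'
{ "results": { "data": [
  { "url": "https://example.com/",      "warnings": ["Missing canonical tag", "Title too short"] },
  { "url": "https://example.com/about", "warnings": ["Missing canonical tag"] },
  { "url": "https://example.com/blog",  "warnings": ["Missing canonical tag"] }
] } }
EOF

# Warnings affecting >= 80% of pages are likely template-level (site-wide)
SITE_WIDE=$(jq -r '
  (.results.data | length) as $total
  | [.results.data[].warnings[]]
  | group_by(.)
  | map(select((length / $total) >= 0.8) | .[0])
  | .[]
' results.json)
echo "$SITE_WIDE"
```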

Applying Fixes

The SEOgent CLI identifies issues and provides fix recommendations. The agent's job is to read these recommendations and apply fixes directly in the codebase:

  1. Run a scan and review the JSON results
  2. For each issue, read the fix field for guidance
  3. Locate the relevant source files in the project
  4. Apply the recommended changes (meta tags, alt text, schema markup, etc.)
  5. Re-scan only the affected URLs to verify the fixes

Common High-Impact Fixes

These fixes consistently produce the largest score improvements, listed in rough priority order:

  1. Canonical tags — add self-referencing <link rel="canonical"> tags to the base template. Generate the URL from the request URI so each page gets its own canonical automatically.

  2. Open Graph tags — add og:title, og:description, og:image to the base template <head>. Use the page's existing title/description meta values so they stay in sync.

  3. JSON-LD structured data — add schema.org markup:

    • Product schema for product pages
    • Article schema for blog posts
    • Organization schema for the company/about page
    • Match the schema type to the page content.
  4. Heading hierarchy — ensure headings follow H1 → H2 → H3 with no skipped levels. Common mistake: using <h4> or <h5> in footers or navigation. Use <p>, <span>, or styled <div> elements instead.

  5. Image lazy loading — add loading="lazy" to below-fold images. Do NOT add it to hero images or anything above the fold — lazy loading above-fold content hurts perceived performance.

  6. robots.txt + sitemap.xml — ensure both exist and are accessible.

  7. Title length — titles under 30 characters are flagged. Pad short titles with the brand name or descriptors (e.g., "Products" → "Products | Acme — Quality Widgets").
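For fixes 1-3 above, the template-level markup looks roughly like this (URLs and values are placeholders; generate them from your templating engine's page variables):

```html
<head>
  <!-- 1. Self-referencing canonical, built from the request URI -->
  <link rel="canonical" href="https://example.com/current-page/">

  <!-- 2. Open Graph tags, kept in sync with the existing title/description -->
  <meta property="og:title" content="Page Title | Acme">
  <meta property="og:description" content="Existing meta description text.">
  <meta property="og:image" content="https://example.com/og-image.png">

  <!-- 3. JSON-LD structured data, type matched to the page content -->
  <script type="application/ld+json">
  { "@context": "https://schema.org", "@type": "Organization",
    "name": "Acme", "url": "https://example.com/" }
  </script>
</head>
```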

Iterative Fix & Verify Workflow

After applying fixes, re-scan only the specific pages that had issues to verify the fixes were applied correctly. This is faster and more efficient than re-scanning the entire site.

Step-by-step

  1. Run the initial full-site scan (3-step async flow above).

  2. Extract the URLs that had issues from the results JSON.

  3. Apply fixes to the relevant source files in the codebase.

  4. Re-scan only the affected URLs:

seogent scan https://example.com/page-with-issues \
  --urls https://example.com/another-page \
  --urls https://example.com/third-page \
  --quiet
  5. Poll status, then fetch results (steps 2-3 of the scanning workflow).

  6. If issues remain, repeat steps 3-5 until clean.

Common Workflows

Full audit with report

  1. Start scan: seogent scan <domain> --quiet
  2. Poll: sleep 15 && seogent status <scan_id> --quiet (repeat until completed)
  3. Fetch: seogent results <scan_id> --quiet
  4. Analyze using the Analysis Strategy above
  5. Present results using the format below
  6. Offer to fix issues in the codebase

Fix-and-verify cycle

  1. Scan the site (async 3-step flow)
  2. Apply fixes directly in the project source files
  3. Re-scan only the affected URLs to verify
  4. Repeat until all critical issues are resolved

Performance-focused audit

Include Core Web Vitals in the scan:

seogent scan <domain> --performance --quiet

Dead link audit

Check for broken links and missing images across the site:

seogent scan <domain> --link-check --quiet

Presenting Results to User

When presenting scan results:

  1. Summary table — show overall score, grade breakdown (excellent/good/needs_work/poor counts), and before/after comparison if this is a re-scan.

  2. Site-level issues first — report findings from site_checks (robots.txt, sitemap, duplicate titles/descriptions). These affect the whole site and are highest priority.

  3. Group page issues by scope:

    • Site-wide template fixes (high leverage) — issues appearing on most pages. Note how many pages each fix will improve.
    • Individual page fixes — issues specific to 1-2 pages.
  4. Note blast radius — for each fix, state how many pages it will improve (e.g., "Adding canonical tags to the base template fixes this on all 32 pages").

  5. Always present the plan and ask before making changes — don't start editing files without user approval.

  6. After fixes, re-scan and compare — show a before/after score comparison to demonstrate improvement.

Gotchas

Things learned the hard way:

- Results for sites with 30+ pages can exceed context limits. Delegate analysis to a subagent instead of reading the full file.
- Don't poll with a bash while-loop; run each `sleep 15 && seogent status` as a separate command.
- Don't add `loading="lazy"` to above-the-fold images; it hurts perceived performance.
- Some flagged issues may be intentional (e.g., a test/demo site with documented SEO gaps). Cross-reference project docs before "fixing" them.


---

## Adding the Scanning Helper Script

For more complex workflows, add a helper script that wraps the polling logic. Create `scan.sh` in your skill directory:

```bash
#!/bin/bash
# scan.sh - SEOgent scan with automatic polling
# Usage: ./scan.sh <url> [max-pages]

URL=$1
MAX_PAGES=${2:-100}

if [ -z "$URL" ]; then
  echo "Usage: ./scan.sh <url> [max-pages]"
  exit 1
fi

if [ -z "$SEOGENT_API_KEY" ]; then
  # Check if auth is configured in config file
  if ! seogent auth --show > /dev/null 2>&1; then
    echo "Error: No API key configured. Run 'seogent auth <token>' or set SEOGENT_API_KEY"
    exit 2
  fi
fi

# Start scan (--quiet for JSON output)
echo "Starting scan of $URL (max $MAX_PAGES pages)..." >&2
RESULT=$(seogent scan "$URL" --max-pages "$MAX_PAGES" --quiet)
SCAN_ID=$(echo "$RESULT" | jq -r '.scan_id')

if [ -z "$SCAN_ID" ] || [ "$SCAN_ID" = "null" ]; then
  echo "Failed to start scan" >&2
  echo "$RESULT" >&2
  exit 1
fi

echo "Scan ID: $SCAN_ID" >&2

# Poll for completion
while true; do
  sleep 15
  STATUS_JSON=$(seogent status "$SCAN_ID" --quiet)
  STATUS=$(echo "$STATUS_JSON" | jq -r '.status')

  echo "Status: $STATUS" >&2

  if [ "$STATUS" = "completed" ]; then
    seogent results "$SCAN_ID" --quiet
    exit 0
  fi

  if [ "$STATUS" = "failed" ]; then
    echo "Scan failed" >&2
    echo "$STATUS_JSON" >&2
    exit 1
  fi

  # "pending", "crawling", or "analyzing" — loop continues
done
```

Adding Example Files

The examples/ directory gives your agent reference material it can consult without running a real scan. These files improve first-scan accuracy and help the agent understand the response structure upfront.

examples/sample-output.json

A realistic example of the full JSON response structure. The agent can reference this to understand the shape of results before parsing real output:

{
  "scan_id": "scan_abc123def456",
  "status": "completed",
  "url": "https://example.com",
  "pages_scanned": 32,
  "average_score": 74,
  "summary": {
    "excellent": 2,
    "good": 12,
    "needs_work": 15,
    "poor": 3
  },
  "site_checks": {
    "checks": [
      {
        "key": "robots_txt",
        "name": "robots.txt",
        "status": "passed",
        "message": "robots.txt is accessible and valid",
        "category": "crawlability"
      },
      {
        "key": "sitemap",
        "name": "XML Sitemap",
        "status": "failed",
        "message": "No sitemap.xml found at /sitemap.xml",
        "category": "crawlability"
      }
    ],
    "duplicate_titles": {
      "count": 2,
      "found": true,
      "duplicates": [
        {
          "title": "Example Site",
          "pages": [
            "https://example.com/about",
            "https://example.com/contact"
          ]
        }
      ]
    },
    "duplicate_descriptions": {
      "count": 0,
      "found": false,
      "duplicates": []
    }
  },
  "top_issues": [
    { "issue": "Missing canonical tag", "count": 28 },
    { "issue": "Missing Open Graph tags", "count": 30 },
    { "issue": "No structured data found", "count": 25 },
    { "issue": "Heading hierarchy skipped (H1 to H4)", "count": 18 },
    { "issue": "Images missing lazy loading", "count": 12 },
    { "issue": "Title too short (under 30 characters)", "count": 8 }
  ],
  "results": {
    "data": [
      {
        "url": "https://example.com/",
        "score": 82,
        "grade": "B",
        "failed_checks": [
          "No structured data found",
          "Missing Open Graph tags"
        ],
        "warnings": [
          "Title could be longer (28 characters, recommend 30-60)"
        ],
        "all_checks": [
          {
            "name": "Meta Title",
            "key": "meta_title",
            "status": "warning",
            "message": "Title is 28 characters. Recommended: 30-60.",
            "category": "meta",
            "weight": 5
          },
          {
            "name": "Canonical Tag",
            "key": "canonical",
            "status": "failed",
            "message": "No canonical tag found. Add a self-referencing canonical.",
            "category": "indexability",
            "weight": 8
          }
        ]
      }
    ],
    "next_cursor": "eyJwYWdlIjoyLCJsaW1pdCI6NTB9",
    "prev_cursor": null,
    "per_page": 50
  }
}

examples/fix-workflow.md

A step-by-step walkthrough of fixing site-wide vs page-specific issues. Include concrete CLI examples for identifying scope, applying fixes, and verifying with targeted re-scans. See the pre-built skill download for the full file, or create your own based on the workflows described in the SKILL.md.


Installing Your Skill

Claude Code

For personal use across all projects, copy the skill directory to your personal skills folder:

mkdir -p ~/.claude/skills/
cp -r seogent-seo/ ~/.claude/skills/seogent-seo/

For a specific project, place it in your project's .claude directory:

mkdir -p .claude/skills/
cp -r seogent-seo/ .claude/skills/seogent-seo/

Claude Code detects skill changes live, so you can edit the SKILL.md during a session without restarting.

Claude.ai

For use in the Claude.ai web or mobile app:

  1. Create a ZIP file of your skill directory. The ZIP should contain the skill folder as its root.
  2. Go to Settings > Capabilities in Claude.ai.
  3. Click "Upload skill" and select your ZIP file.
  4. Claude will read the SKILL.md and display the skill details. Confirm the upload.

Note: Skills in Claude.ai require Code Execution to be enabled. Make sure this is toggled on in your Capabilities settings.

API Integration

If you're building a custom agent using the Anthropic API with code execution, you can provision skills programmatically. Refer to the Skills API Quickstart in Anthropic's documentation for implementation details.


Usage Examples

Once installed, the skill activates automatically when you mention SEO-related tasks. Here are some examples of how it works in practice.

Basic SEO Audit

User: "Check the SEO on https://mysite.com"

Claude: I'll run a full SEO scan on mysite.com now.

[Executes: seogent scan mysite.com --quiet]
[Polls: sleep 15 && seogent status <scan_id> --quiet until completed]
[Fetches: seogent results <scan_id> --quiet]

Scan complete! Here's what I found:

SEO Score: 72/100

Critical Issues (4):
1. Missing meta descriptions on 8 pages
2. 3 images over 1MB without optimization
3. No structured data on product pages
4. Broken canonical tags on /blog/*

Warnings (11):
- Duplicate title tags on 5 pages
- Missing alt text on 23 images
- Slow LCP (3.8s) on homepage
...

Want me to fix any of these? I can start with the critical issues.

Targeted Fix & Verify Workflow

User: "Fix the missing meta descriptions"

Claude: I'll fix the meta descriptions. Let me locate the relevant
templates in your codebase.

[Agent reads the scan results to identify the 8 affected URLs,
 finds the corresponding source files, and adds meta descriptions]

Done! I've added meta descriptions to all 8 pages. Now let me
verify the fixes by re-scanning just those URLs...

[Executes: seogent scan https://mysite.com/about \
  --urls https://mysite.com/pricing \
  --urls https://mysite.com/blog/getting-started \
  --urls https://mysite.com/blog/seo-tips \
  --urls https://mysite.com/features \
  --urls https://mysite.com/contact \
  --urls https://mysite.com/docs \
  --urls https://mysite.com/changelog \
  --quiet]
[Polls: sleep 15 && seogent status <scan_id> --quiet until completed]
[Fetches: seogent results <scan_id> --quiet]

Verification scan complete! All 8 pages now have meta descriptions.
No remaining issues on these URLs. Score for these pages: 94/100.

Want me to tackle the image optimization next?

Comparing Before & After

User: "Re-scan the whole site to see the overall improvement"

Claude: Running a full site scan to compare...

[Executes: seogent scan mysite.com --quiet]
[Polls: sleep 15 && seogent status <scan_id> --quiet until completed]
[Fetches: seogent results <scan_id> --quiet]

Score improved from 72 to 84!

Resolved:
- Meta descriptions now present on all pages
- 3 of 4 critical issues fixed

Remaining:
- Images still need optimization (3 files > 1MB)
- Structured data still missing on product pages

Want me to tackle the images next?

CI/CD Integration

User: "Add an SEO check to our GitHub Actions pipeline"

Claude: I'll add a step that fails the build if critical SEO issues
are found.

[Agent creates/edits .github/workflows/seo-check.yml]

- name: SEO audit
  env:
    SEOGENT_API_KEY: ${{ secrets.SEOGENT_API_KEY }}
  run: |
    SCAN_ID=$(npx seogent scan ${{ env.SITE_URL }} --quiet | jq -r '.scan_id')
    while true; do
      sleep 15
      STATUS=$(npx seogent status $SCAN_ID --quiet | jq -r '.status')
      [ "$STATUS" = "completed" ] && break
      [ "$STATUS" = "failed" ] && echo "Scan failed" && exit 1
    done
    npx seogent results $SCAN_ID --min-severity high --issues-only --quiet > results.json
    HIGH=$(jq '.results.data | length' results.json)
    if [ "$HIGH" -gt 0 ]; then echo "$HIGH pages with high-severity issues" && exit 1; fi

Customizing the Skill

The base skill covers general SEO auditing, but you can tailor it to your specific needs.

Add Performance Scanning

If Core Web Vitals are important to you, add instructions to always include performance:

## Default Behavior
When scanning, always include --performance unless the user specifies
otherwise. Flag any LCP > 2.5s, CLS > 0.1, or INP > 200ms as
high-priority issues.

Project-Specific Conventions

Add context about your tech stack so the agent generates appropriate fixes:

## Project Context
This project uses Craft CMS with Twig templates.
- Meta tags are set in templates/_layouts/base.twig
- SEO fields use the SEOmatic plugin
- Images are served via Imager-X transforms

When suggesting fixes:
- Use SEOmatic field syntax, not raw HTML meta tags
- Reference Imager-X for image optimization suggestions
- Twig template paths follow templates/{section}/{entry}.twig

Combine with Other Skills

Skills compose naturally. If you have a deployment skill, the agent can scan, fix, and deploy in one workflow:

User: "Audit the SEO, fix critical issues, and deploy"

Claude uses seogent-seo skill for scanning and fixes,
then uses deploy skill to push changes to production.

Adapting for Other AI Agents

While this guide focuses on Claude, the SEOgent CLI works with any agent that can execute shell commands. Here's how to adapt the skill for other platforms.

OpenAI / ChatGPT Custom GPTs

Create a custom action that wraps the CLI commands. Define the scan, status, and results endpoints in your OpenAPI spec, then add instructions in the GPT's system prompt that mirror the workflows section of the SKILL.md.

Cursor / Windsurf / Other IDE Agents

Most IDE-based agents support custom rules or instructions files. Place the SKILL.md content in your project's agent configuration file (e.g., .cursorrules, .windsurfrules) or reference it from your project's context documentation.

MCP Server Approach

For agents that support the Model Context Protocol (MCP), you can wrap the SEOgent CLI as an MCP server. This exposes the scan, status, results, and cancel commands as structured tools that any MCP-compatible agent can discover and call.

Agent Skills Specification: The SKILL.md format follows the open Agent Skills specification at agentskills.io, designed for portability across platforms. Skills you create for Claude can be adapted for other agents with minimal changes.


Troubleshooting

Common Issues

CLI not found: Make sure seogent is installed globally and your PATH includes the npm global bin directory. Run npm prefix -g to find the global prefix; binaries live in its bin/ subdirectory (older npm versions also support npm bin -g).

Authentication errors: Verify your API key is set correctly. Run seogent auth --show to check the stored key, or echo $SEOGENT_API_KEY to verify the environment variable. Exit code 2 indicates an authentication problem.

Scan timeouts: Large sites can take time. Use --max-pages to limit the crawl scope for initial testing. Use the async workflow (scan → status → results) and poll until complete.

Rate limits: SEOgent enforces rate limits based on your plan. Exit code 4 indicates you've been rate limited. Check your remaining credits with seogent credits.

Skill not activating: Check that your SKILL.md frontmatter includes a descriptive description field with relevant trigger words. The agent matches your request against this description to decide whether to load the skill.


Download the Pre-Built Skill

Don't want to build it from scratch? Download the SEOgent SEO Skill and drop it into ~/.claude/skills/ (personal) or .claude/skills/ (project). It includes the SKILL.md and scanning helper script — ready to use out of the box.