.skill Format Specification

The .skill format is a portable specification for AI agent capabilities. Write once, run on any framework. This page describes the manifest format and directory structure.

Draft specification — subject to change before v1.0 release.

skill.yaml manifest

Every skill must include a skill.yaml file at the root. This manifest declares the skill's metadata, dependencies, compatibility, and interface.

yaml
# skill.yaml — the .skill manifest
name: research-loop
version: 1.0.0
description: Agentic research with gap-filling search loops
author: clawsmarket

# Framework compatibility
compatibility:
  - claude-code
  - openclaw
  - openai-agents
  - langgraph
  - crewai
  - custom-api

# Dependencies
tools:
  - tavily          # Search API
  - openrouter      # LLM provider

# Environment variables required
env:
  - TAVILY_API_KEY
  - OPENROUTER_API_KEY

# Entry points per framework
install:
  claude-code: claude skill install clawsmarket/research-loop
  openclaw: openclaw skill add research-loop
  pip: pip install clawsmarket-skills[research-loop]

# Skill interface
inputs:
  - name: topic
    type: string
    description: The subject to research
  - name: max_rounds
    type: integer
    default: 3
    description: Maximum research iterations

outputs:
  - name: brief
    type: string
    description: Structured research brief (2-4K chars)
  - name: sources
    type: array
    description: List of source URLs used

Directory structure

Skills use framework-specific adapters to handle differences in how each framework invokes capabilities. The core logic lives in src/, with thin adapters in adapters/.

file tree
research-loop/
├── skill.yaml          # Manifest (required)
├── README.md           # Documentation
├── src/
│   ├── __init__.py     # Python entry point
│   ├── agent.py        # Core research logic
│   └── search.py       # Tavily wrapper
├── adapters/
│   ├── claude_code.py  # Claude Code adapter
│   ├── openclaw.py     # OpenClaw adapter
│   ├── openai.py       # OpenAI Agents adapter
│   ├── langgraph.py    # LangGraph adapter
│   └── crewai.py       # CrewAI adapter
└── tests/
    └── test_agent.py

Key concepts

Framework adapters

Each adapter translates between the framework's invocation pattern and the skill's core interface. Claude Code uses tool definitions, OpenClaw uses skill manifests, LangGraph uses nodes — the adapter handles the translation.
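The shape of that translation can be sketched in Python. Everything below is illustrative, not part of the spec: the core entry point `run`, the `TOOL_DEFINITION` dict, and `handle_tool_call` are hypothetical names standing in for what `src/agent.py` and `adapters/claude_code.py` might contain.

```python
# Hypothetical sketch of a framework adapter. The core logic lives in
# src/, and the adapter only translates the framework's invocation
# pattern into the skill's declared interface.

def run(topic: str, max_rounds: int = 3) -> dict:
    """Core skill entry point (would live in src/agent.py)."""
    # ... research loop elided ...
    return {"brief": f"Brief on {topic}", "sources": []}

# A Claude Code-style adapter might expose the manifest interface
# as a tool definition with a JSON Schema for the inputs:
TOOL_DEFINITION = {
    "name": "research-loop",
    "description": "Agentic research with gap-filling search loops",
    "input_schema": {
        "type": "object",
        "properties": {
            "topic": {"type": "string", "description": "The subject to research"},
            "max_rounds": {"type": "integer", "default": 3},
        },
        "required": ["topic"],
    },
}

def handle_tool_call(arguments: dict) -> dict:
    """Thin translation layer: framework arguments -> core interface."""
    return run(arguments["topic"], arguments.get("max_rounds", 3))
```

The adapter stays thin on purpose: swapping frameworks means writing a new `handle_tool_call`-style shim, not touching `src/`.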

Tool dependencies

Skills declare which tools they need. The runtime ensures the required environment variables are set before execution. Missing tools produce clear error messages.
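A minimal sketch of that preflight check, assuming the runtime reads the manifest's `env` list before invoking the skill (the function name `check_env` is illustrative):

```python
import os

def check_env(required: list[str]) -> None:
    """Fail fast with a clear message if any declared env var is missing.

    Sketch of the runtime check implied by the manifest's `env` section.
    """
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            "skill 'research-loop' cannot run; missing environment "
            "variables: " + ", ".join(missing)
        )

# The runtime would call this with the manifest's env list, e.g.:
# check_env(["TAVILY_API_KEY", "OPENROUTER_API_KEY"])
```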

Typed inputs/outputs

Skills declare their interface with typed inputs and outputs. This enables frameworks to validate arguments, generate documentation, and compose skills into larger workflows.
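As a sketch of what such validation could look like, the snippet below checks call arguments against the manifest's declared inputs and fills in defaults. The `validate` function and the `PY_TYPES` mapping are assumptions for illustration, not part of the spec.

```python
# Hypothetical validator for the manifest's typed inputs.
INPUTS = [
    {"name": "topic", "type": "string"},
    {"name": "max_rounds", "type": "integer", "default": 3},
]

PY_TYPES = {"string": str, "integer": int, "array": list}

def validate(args: dict) -> dict:
    """Check arguments against declared inputs; apply defaults."""
    resolved = {}
    for spec in INPUTS:
        name = spec["name"]
        if name in args:
            value = args[name]
        elif "default" in spec:
            value = spec["default"]
        else:
            raise ValueError(f"missing required input: {name}")
        if not isinstance(value, PY_TYPES[spec["type"]]):
            raise TypeError(f"{name} must be {spec['type']}")
        resolved[name] = value
    return resolved
```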

Versioning

Skills follow semantic versioning. Breaking changes to the interface require a major version bump. The registry tracks compatibility between skill versions and framework versions.
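One plausible reading of that rule, sketched as code: two versions are compatible when they share a major version and the installed version is at least the required one. This is illustrative; the registry's actual policy may differ.

```python
def compatible(installed: str, required: str) -> bool:
    """Semver-style check: same major version, installed >= required."""
    inst = tuple(int(p) for p in installed.split("."))
    req = tuple(int(p) for p in required.split("."))
    return inst[0] == req[0] and inst >= req
```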

API-first skill definitions

Instead of installing skill packages, agents fetch a machine-readable definition via API and execute using their own tools. This is framework-agnostic and requires zero installation.

How it works

  1. Agent fetches the skill definition: GET /api/skills/{slug}/definition
  2. Definition includes step-by-step instructions, typed inputs/outputs, env vars, and prompt templates
  3. Agent follows the instructions using its own tool-calling capabilities
  4. No packages to install, no adapters to write — works with any framework
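The steps above can be sketched in Python using only the standard library. `fetch_definition` mirrors step 1; `plan` is a hypothetical helper showing how an agent might turn the definition's `env` and `instructions` fields into an execution plan for its own tools.

```python
import json
from urllib.request import urlopen

def fetch_definition(slug: str, base: str = "https://www.clawsmarket.com") -> dict:
    """Step 1: fetch the machine-readable skill definition."""
    with urlopen(f"{base}/api/skills/{slug}/definition") as resp:
        return json.load(resp)

def plan(definition: dict) -> list[str]:
    """Steps 2-3: turn a fetched definition into an ordered list of
    steps the agent can follow with its native tool-calling.
    Purely illustrative."""
    steps = [f"ensure env var {v} is set" for v in definition.get("env", [])]
    steps += definition.get("instructions", [])
    return steps
```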

Definition response format

json
{
  "slug": "research-loop",
  "name": "Research Loop",
  "version": "1.0.0",
  "description": "Agentic research with gap-filling search loops",
  "compatibility": ["claude-code", "openclaw", ...],
  "tools": ["tavily", "openrouter"],
  "env": ["TAVILY_API_KEY", "OPENROUTER_API_KEY"],
  "inputs": [
    { "name": "topic", "type": "string", "description": "...", "required": true },
    { "name": "max_rounds", "type": "integer", "default": 3 }
  ],
  "outputs": [
    { "name": "brief", "type": "string", "description": "..." },
    { "name": "sources", "type": "array", "description": "..." }
  ],
  "instructions": [
    "Accept a research topic and optional context",
    "Round 1: Send topic to LLM to extract facts and identify gaps",
    "Search via Tavily API for each gap",
    "Repeat up to max_rounds",
    "Synthesize into a structured brief"
  ],
  "prompt_template": "You are a research analyst...",
  "examples": [{ "input": { "topic": "..." }, "output": { "brief": "..." } }],
  "fetch": "GET /api/skills/research-loop/definition",
  "docs": "/docs/skill-format"
}

Example: fetching and using a skill

bash
# 1. Fetch the skill definition
curl -s https://www.clawsmarket.com/api/skills/email-personalization/definition | jq

# 2. Read the instructions, inputs, and outputs
# 3. Call the skill by following the instructions with your own tools
# 4. Provide the required inputs and collect the outputs

# Example: an agent reads the definition and executes each instruction step
# using its native tool-calling (Bash, HTTP requests, LLM calls, etc.)

Supported frameworks

Claude Code: Full support
OpenClaw: Full support
OpenAI Agents: Full support
LangGraph: Partial support
CrewAI: Partial support
Custom API: Full support