Skills add specialized behaviors, domain knowledge, and context-aware triggers to your agent through structured prompts.
This guide shows how to implement skills in the SDK; for a conceptual overview, see Skills Overview.

OpenHands supports an extended version of the AgentSkills standard with optional keyword triggers:
```python
from openhands.sdk.context import Skill, KeywordTrigger

Skill(
    name="encryption-helper",
    content="Use the encrypt.sh script to encrypt messages.",
    trigger=KeywordTrigger(keywords=["encrypt", "decrypt"]),
)
```
When the user says “encrypt this”, the skill content is injected into the message:
```text
<EXTRA_INFO>
The following information has been included based on a keyword match for "encrypt".

Skill location: /path/to/encryption-helper

Use the encrypt.sh script to encrypt messages.
</EXTRA_INFO>
```
You can install AgentSkills into a persistent directory and manage them through
`openhands.sdk.skills`. Skills are stored under `~/.openhands/skills/installed/`
with a `.installed.json` metadata file that records an `enabled` flag.
`list_installed_skills()` returns all installed skills, while
`load_installed_skills()` returns only those with `enabled=true`.

The public lifecycle API includes `install_skill()`, `update_skill()`,
`enable_skill()`, `disable_skill()`, and `uninstall_skill()`, which gives the
CLI a clean SDK surface for `/skill install`, `/skill enable`,
`/skill disable`, and `/skill uninstall`.
This example mirrors the installed-plugin lifecycle example, but for
AgentSkills. It installs sample skills, lists them, toggles the
persistent enabled flag, and uninstalls one skill while leaving the
other available.
"""Example: Installing and Managing SkillsThis example demonstrates installed skill lifecycle operations in the SDK:1. Install skills from local paths into persistent storage2. List tracked skills and load only the enabled ones3. Inspect the `.installed.json` metadata file and `enabled` flag4. Disable and re-enable a skill without reinstalling it5. Uninstall a skill while leaving other installed skills availableFor marketplace installation flows, see:`examples/01_standalone_sdk/43_mixed_marketplace_skills/`."""import jsonimport tempfilefrom pathlib import Pathfrom openhands.sdk.skills import ( disable_skill, enable_skill, install_skill, list_installed_skills, load_installed_skills, uninstall_skill,)script_dir = Path(__file__).resolve().parentexample_skills_dir = script_dir.parent / "01_loading_agentskills" / "example_skills"def print_state(label: str, installed_dir: Path) -> None: """Print tracked, loaded, and persisted skill state.""" print(f"\n{label}") print("-" * len(label)) installed = list_installed_skills(installed_dir=installed_dir) print("Tracked skills:") for info in installed: print(f" - {info.name} (enabled={info.enabled}, source={info.source})") loaded = load_installed_skills(installed_dir=installed_dir) print(f"Loaded skills: {[skill.name for skill in loaded]}") metadata = json.loads((installed_dir / ".installed.json").read_text()) print("Metadata file:") print(json.dumps(metadata, indent=2))def demo_install_skills(installed_dir: Path) -> list[str]: """Install the sample skills into the isolated installed directory.""" print("\n" + "=" * 60) print("DEMO 1: Installing local skills") print("=" * 60) installed_names: list[str] = [] for skill_dir in sorted(example_skills_dir.iterdir()): if not skill_dir.is_dir(): continue info = install_skill(source=str(skill_dir), installed_dir=installed_dir) installed_names.append(info.name) print(f"✓ Installed: {info.name}") print(f" Source: {info.source}") print(f" Path: {info.install_path}") return installed_namesdef 
demo_list_and_load_skills(installed_dir: Path) -> None: """List tracked skills and load them as runtime Skill objects.""" print("\n" + "=" * 60) print("DEMO 2: Listing and loading installed skills") print("=" * 60) installed = list_installed_skills(installed_dir=installed_dir) print("Tracked skills:") for info in installed: desc = (info.description or "No description")[:60] print(f" - {info.name} (enabled={info.enabled})") print(f" Description: {desc}...") loaded = load_installed_skills(installed_dir=installed_dir) print(f"\nLoaded {len(loaded)} skill(s):") for skill in loaded: desc = (skill.description or "No description")[:60] print(f" - {skill.name}: {desc}...")def demo_enable_disable_skill(installed_dir: Path, skill_name: str) -> None: """Disable then re-enable a skill and show the persisted metadata.""" print("\n" + "=" * 60) print("DEMO 3: Disabling and re-enabling a skill") print("=" * 60) print_state("Before disable", installed_dir) assert disable_skill(skill_name, installed_dir=installed_dir) is True print_state("After disable", installed_dir) assert skill_name not in [ skill.name for skill in load_installed_skills(installed_dir=installed_dir) ] metadata = json.loads((installed_dir / ".installed.json").read_text()) assert metadata["skills"][skill_name]["enabled"] is False assert enable_skill(skill_name, installed_dir=installed_dir) is True print_state("After re-enable", installed_dir) metadata = json.loads((installed_dir / ".installed.json").read_text()) assert metadata["skills"][skill_name]["enabled"] is True assert skill_name in [ skill.name for skill in load_installed_skills(installed_dir=installed_dir) ]def demo_uninstall_skill( installed_dir: Path, skill_name: str, remaining_skill_name: str) -> None: """Uninstall one skill and confirm the other skill remains available.""" print("\n" + "=" * 60) print("DEMO 4: Uninstalling a skill") print("=" * 60) assert uninstall_skill(skill_name, installed_dir=installed_dir) is True print_state("After uninstall", 
installed_dir) assert not (installed_dir / skill_name).exists() metadata = json.loads((installed_dir / ".installed.json").read_text()) assert skill_name not in metadata["skills"] assert remaining_skill_name in metadata["skills"]if __name__ == "__main__": with tempfile.TemporaryDirectory() as tmpdir: installed_dir = Path(tmpdir) / "installed-skills" installed_dir.mkdir(parents=True) installed_names = demo_install_skills(installed_dir) demo_list_and_load_skills(installed_dir) demo_enable_disable_skill(installed_dir, skill_name="rot13-encryption") demo_uninstall_skill( installed_dir, skill_name="rot13-encryption", remaining_skill_name="code-style-guide", ) remaining_names = [ info.name for info in list_installed_skills(installed_dir=installed_dir) ] assert remaining_names == ["code-style-guide"] assert sorted(installed_names) == ["code-style-guide", "rot13-encryption"] print("\nEXAMPLE_COST: 0")
You can run the example code as-is.

The model name should follow the LiteLLM convention: `provider/model_name` (e.g., `anthropic/claude-sonnet-4-5-20250929`, `openai/gpt-4o`).

The `LLM_API_KEY` should be the API key for your chosen provider.

ChatGPT Plus/Pro subscribers: you can use `LLM.subscription_login()` to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.
Use a marketplace when you want to install a curated mix of local and remote
AgentSkills in one step. The example below shows how to define a marketplace,
install all listed skills, and inspect the installed metadata.
"""Example: Mixed Marketplace with Local and Remote SkillsThis example demonstrates how to create a marketplace that includes both:1. Local skills hosted in your project directory2. Remote skills from GitHub (OpenHands/extensions repository)The marketplace.json schema supports source paths in these formats:- Local paths: ./path, ../path, /absolute/path, ~/path, file:///path- GitHub URLs: https://github.com/{owner}/{repo}/blob/{branch}/{path}This pattern is useful for teams that want to:- Maintain their own custom skills locally- Reference specific skills from remote repositories- Create a curated skill set for their specific workflowsDirectory Structure: 43_mixed_marketplace_skills/ ├── .plugin/ │ └── marketplace.json # Marketplace with local and remote skills ├── skills/ │ └── greeting-helper/ │ └── SKILL.md # Local skill content ├── main.py # This file └── README.md # DocumentationUsage: # Install all skills from marketplace to ~/.openhands/skills/installed/ python main.py --install # Force reinstall (overwrite existing) python main.py --install --force # Show installed skills python main.py --list"""import sysfrom pathlib import Pathfrom openhands.sdk.plugin import Marketplacefrom openhands.sdk.skills import ( install_skills_from_marketplace, list_installed_skills,)def main(): script_dir = Path(__file__).parent if "--list" in sys.argv: # List installed skills print("=" * 80) print("Installed Skills") print("=" * 80) installed = list_installed_skills() if not installed: print("\nNo skills installed.") print("Run with --install to install skills from the marketplace.") else: for info in installed: desc = (info.description or "No description")[:60] print(f"\n {info.name}") print(f" Description: {desc}...") print(f" Source: {info.source}") return if "--install" in sys.argv: # Install skills from marketplace print("=" * 80) print("Installing Skills from Marketplace") print("=" * 80) print(f"\nMarketplace directory: {script_dir}") force = "--force" in sys.argv 
installed = install_skills_from_marketplace(script_dir, force=force) print(f"\n\nInstalled {len(installed)} skills:") for info in installed: print(f" - {info.name}") # Show all installed skills print("\n" + "=" * 80) print("All Installed Skills") print("=" * 80) all_installed = list_installed_skills() for info in all_installed: desc = (info.description or "No description")[:50] print(f" - {info.name}: {desc}...") return # Default: show marketplace info print("=" * 80) print("Marketplace Information") print("=" * 80) print(f"\nMarketplace directory: {script_dir}") marketplace = Marketplace.load(script_dir) print(f"Name: {marketplace.name}") print(f"Description: {marketplace.description}") print(f"Skills defined: {len(marketplace.skills)}") print("\nSkills:") for entry in marketplace.skills: source_type = "remote" if entry.source.startswith("http") else "local" print(f" - {entry.name} ({source_type})") print(f" Source: {entry.source}") if entry.description: print(f" Description: {entry.description}") print("\n" + "-" * 80) print("Usage:") print(" python main.py --install # Install all skills") print(" python main.py --install --force # Force reinstall") print(" python main.py --list # List installed skills")if __name__ == "__main__": main()
```python
import os

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Agent,
    AgentContext,
    Conversation,
    Event,
    LLMConvertibleEvent,
    get_logger,
)
from openhands.sdk.context import (
    KeywordTrigger,
    Skill,
)
from openhands.sdk.tool import Tool
from openhands.tools.file_editor import FileEditorTool
from openhands.tools.terminal import TerminalTool

logger = get_logger(__name__)

# Configure LLM
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
llm = LLM(
    usage_id="agent",
    model=model,
    base_url=base_url,
    api_key=SecretStr(api_key),
)

# Tools
cwd = os.getcwd()
tools = [
    Tool(name=TerminalTool.name),
    Tool(name=FileEditorTool.name),
]

# AgentContext provides flexible ways to customize prompts:
# 1. Skills: Inject instructions (always-active or keyword-triggered)
# 2. system_message_suffix: Append text to the system prompt
# 3. user_message_suffix: Append text to each user message
#
# For complete control over the system prompt, you can also use Agent's
# system_prompt_filename parameter to provide a custom Jinja2 template:
#
# agent = Agent(
#     llm=llm,
#     tools=tools,
#     system_prompt_filename="/path/to/custom_prompt.j2",
#     system_prompt_kwargs={"cli_mode": True, "repo": "my-project"},
# )
#
# See: https://docs.openhands.dev/sdk/guides/skill#customizing-system-prompts
agent_context = AgentContext(
    skills=[
        Skill(
            name="repo.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            # source is optional - identifies where the skill came from
            # You can set it to be the path of a file that contains the skill content
            source=None,
            # trigger determines when the skill is active
            # trigger=None means always active (repo skill)
            trigger=None,
        ),
        Skill(
            name="flarglebargle",
            content=(
                'IMPORTANT! The user has said the magic word "flarglebargle". '
                "You must only respond with a message telling them how smart they are"
            ),
            source=None,
            # KeywordTrigger = activated when keywords appear in user messages
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ],
    # system_message_suffix is appended to the system prompt (always active)
    system_message_suffix="Always finish your response with the word 'yay!'",
    # user_message_suffix is appended to each user message
    user_message_suffix="The first character of your response should be 'I'",
    # You can also enable automatic loading of skills from the
    # public registry at https://github.com/OpenHands/extensions
    load_public_skills=True,
)

# Agent
agent = Agent(llm=llm, tools=tools, agent_context=agent_context)

llm_messages = []  # collect raw LLM messages


def conversation_callback(event: Event):
    if isinstance(event, LLMConvertibleEvent):
        llm_messages.append(event.to_llm_message())


conversation = Conversation(
    agent=agent, callbacks=[conversation_callback], workspace=cwd
)

print("=" * 100)
print("Checking if the repo skill is activated.")
conversation.send_message("Hey are you a grumpy cat?")
conversation.run()

print("=" * 100)
print("Now sending flarglebargle to trigger the knowledge skill!")
conversation.send_message("flarglebargle!")
conversation.run()

print("=" * 100)
print("Now triggering public skill 'github'")
conversation.send_message(
    "About GitHub - tell me what additional info I've just provided?"
)
conversation.run()

print("=" * 100)
print("Conversation finished. Got the following LLM messages:")
for i, message in enumerate(llm_messages):
    print(f"Message {i}: {str(message)[:200]}")

# Report cost
cost = llm.metrics.accumulated_cost
print(f"EXAMPLE_COST: {cost}")
```
Skills are defined with a name, content (the instructions), and an optional trigger:
```python
agent_context = AgentContext(
    skills=[
        Skill(
            name="AGENTS.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            trigger=None,  # Always active
        ),
        Skill(
            name="flarglebargle",
            content='IMPORTANT! The user has said the magic word "flarglebargle". '
            "You must only respond with a message telling them how smart they are",
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ]
)
```
The SKILL.md file defines the skill with YAML frontmatter:
```markdown
---
name: my-skill                # Required (standard)
description: >                # Required (standard)
  A brief description of what this skill
  does and when to use it.
license: MIT                  # Optional (standard)
compatibility: Requires bash  # Optional (standard)
metadata:                     # Optional (standard)
  author: your-name
  version: "1.0"
triggers:                     # Optional (OpenHands extension)
  - keyword1
  - keyword2
---

# Skill Content

Instructions and documentation for the agent...
```
| Field | Required | Description |
| --- | --- | --- |
| `name` | Yes | Skill name |
| `description` | Yes | What the skill does and when to use it |
| `triggers` | No | Keywords that auto-activate this skill (OpenHands extension) |
| `license` | No | License name |
| `compatibility` | No | Environment requirements |
| `metadata` | No | Custom key-value pairs |
Add `triggers` to make your SKILL.md keyword-activated when a user prompt matches. Without `triggers`, the skill can only be invoked by the agent, not triggered by the user.
"""Example: Loading Skills from Disk (AgentSkills Standard)This example demonstrates how to load skills following the AgentSkills standardfrom a directory on disk.Skills are modular, self-contained packages that extend an agent's capabilitiesby providing specialized knowledge, workflows, and tools. They follow theAgentSkills standard which includes:- SKILL.md file with frontmatter metadata (name, description, triggers)- Optional resource directories: scripts/, references/, assets/The example_skills/ directory contains two skills:- rot13-encryption: Has triggers (encrypt, decrypt) - listed in <available_skills> AND content auto-injected when triggered- code-style-guide: No triggers - listed in <available_skills> for on-demand accessAll SKILL.md files follow the AgentSkills progressive disclosure model:they are listed in <available_skills> with name, description, and location.Skills with triggers get the best of both worlds: automatic content injectionwhen triggered, plus the agent can proactively read them anytime."""import osimport sysfrom pathlib import Pathfrom pydantic import SecretStrfrom openhands.sdk import LLM, Agent, AgentContext, Conversationfrom openhands.sdk.context.skills import ( discover_skill_resources, load_skills_from_dir,)from openhands.sdk.tool import Toolfrom openhands.tools.file_editor import FileEditorToolfrom openhands.tools.terminal import TerminalTool# Get the directory containing this scriptscript_dir = Path(__file__).parentexample_skills_dir = script_dir / "example_skills"# =========================================================================# Part 1: Loading Skills from a Directory# =========================================================================print("=" * 80)print("Part 1: Loading Skills from a Directory")print("=" * 80)print(f"Loading skills from: {example_skills_dir}")# Discover resources in the skill directoryskill_subdir = example_skills_dir / "rot13-encryption"resources = 
discover_skill_resources(skill_subdir)print("\nDiscovered resources in rot13-encryption/:")print(f" - scripts: {resources.scripts}")print(f" - references: {resources.references}")print(f" - assets: {resources.assets}")# Load skills from the directoryrepo_skills, knowledge_skills, agent_skills = load_skills_from_dir(example_skills_dir)print("\nLoaded skills from directory:")print(f" - Repo skills: {list(repo_skills.keys())}")print(f" - Knowledge skills: {list(knowledge_skills.keys())}")print(f" - Agent skills (SKILL.md): {list(agent_skills.keys())}")# Access the loaded skill and show all AgentSkills standard fieldsif agent_skills: skill_name = next(iter(agent_skills)) loaded_skill = agent_skills[skill_name] print(f"\nDetails for '{skill_name}' (AgentSkills standard fields):") print(f" - Name: {loaded_skill.name}") desc = loaded_skill.description or "" print(f" - Description: {desc[:70]}...") print(f" - License: {loaded_skill.license}") print(f" - Compatibility: {loaded_skill.compatibility}") print(f" - Metadata: {loaded_skill.metadata}") if loaded_skill.resources: print(" - Resources:") print(f" - Scripts: {loaded_skill.resources.scripts}") print(f" - References: {loaded_skill.resources.references}") print(f" - Assets: {loaded_skill.resources.assets}") print(f" - Skill root: {loaded_skill.resources.skill_root}")# =========================================================================# Part 2: Using Skills with an Agent# =========================================================================print("\n" + "=" * 80)print("Part 2: Using Skills with an Agent")print("=" * 80)# Check for API keyapi_key = os.getenv("LLM_API_KEY")if not api_key: print("Skipping agent demo (LLM_API_KEY not set)") print("\nTo run the full demo, set the LLM_API_KEY environment variable:") print(" export LLM_API_KEY=your-api-key") sys.exit(0)# Configure LLMmodel = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")llm = LLM( usage_id="skills-demo", model=model, 
api_key=SecretStr(api_key), base_url=os.getenv("LLM_BASE_URL"),)# Create agent context with loaded skillsagent_context = AgentContext( skills=list(agent_skills.values()), # Disable public skills for this demo to keep output focused load_public_skills=False,)# Create agent with tools so it can read skill resourcestools = [ Tool(name=TerminalTool.name), Tool(name=FileEditorTool.name),]agent = Agent(llm=llm, tools=tools, agent_context=agent_context)# Create conversationconversation = Conversation(agent=agent, workspace=os.getcwd())# Test the skill (triggered by "encrypt" keyword)# The skill provides instructions and a script for ROT13 encryptionprint("\nSending message with 'encrypt' keyword to trigger skill...")conversation.send_message("Encrypt the message 'hello world'.")conversation.run()print(f"\nTotal cost: ${llm.metrics.accumulated_cost:.4f}")print(f"EXAMPLE_COST: {llm.metrics.accumulated_cost:.4f}")
```python
from openhands.sdk.context.skills import discover_skill_resources

resources = discover_skill_resources(skill_dir)
print(resources.scripts)     # List of script files
print(resources.references)  # List of reference files
print(resources.assets)      # List of asset files
print(resources.skill_root)  # Path to skill directory
```
The <location> element in <available_skills> follows the AgentSkills standard, allowing agents to read the full skill content on demand. When a triggered skill is activated, the content is injected with the location path:
```text
<EXTRA_INFO>
The following information has been included based on a keyword match for "encrypt".

Skill location: /path/to/rot13-encryption
(Use this path to resolve relative file references in the skill content below)

[skill content from SKILL.md]
</EXTRA_INFO>
```
This enables skills to reference their own scripts and resources using relative paths like ./scripts/encrypt.sh.
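To illustrate how such relative references resolve, here is a small sketch. The `resolve_skill_resource` helper is hypothetical (not part of the SDK); it performs the same join the agent does when it follows the injected skill location:

```python
from pathlib import Path


def resolve_skill_resource(skill_location: str, relative_ref: str) -> Path:
    """Join a relative reference from SKILL.md onto the injected skill location."""
    # Path() normalizes a leading "./", so "./scripts/encrypt.sh" joins cleanly.
    return Path(skill_location) / Path(relative_ref)


script = resolve_skill_resource("/path/to/rot13-encryption", "./scripts/encrypt.sh")
print(script)  # /path/to/rot13-encryption/scripts/encrypt.sh
```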
Here’s a skill with triggers (OpenHands extension).

`SKILL.md`:
````markdown
---
name: rot13-encryption
description: >
  This skill helps encrypt and decrypt messages using ROT13 cipher.
triggers:
  - encrypt
  - decrypt
  - cipher
---

# ROT13 Encryption Skill

Run the [encrypt.sh](scripts/encrypt.sh) script with your message:

```bash
./scripts/encrypt.sh "your message"
```
````
`scripts/encrypt.sh`:
```bash
#!/bin/bash
echo "$1" | tr 'A-Za-z' 'N-ZA-Mn-za-m'
```
When the user says “encrypt”, the skill is triggered and the agent can use the provided script.
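The `tr 'A-Za-z' 'N-ZA-Mn-za-m'` mapping in the script is plain ROT13, which Python's standard library also implements via the `rot_13` codec; a quick sanity check:

```python
import codecs

msg = "hello world"
enc = codecs.encode(msg, "rot_13")  # same mapping as tr 'A-Za-z' 'N-ZA-Mn-za-m'
print(enc)  # uryyb jbeyq

# ROT13 is its own inverse: encoding twice restores the original message
assert codecs.encode(enc, "rot_13") == msg
```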
OpenHands maintains a public skills repository with community-contributed skills. You can automatically load these skills without waiting for SDK updates.
**Skill Precedence by Name**: If a skill name conflicts, your explicitly defined skills take precedence over public skills. For example, if you define a skill named `code-review`, the public `code-review` skill will be skipped entirely.

**Multiple Skills with Same Trigger**: Skills with different names but the same trigger can coexist and will ALL be activated when the trigger matches. To add project-specific guidelines alongside public skills, use a unique name (e.g., `custom-codereview-guide` instead of `code-review`). Both skills will be triggered together.
```python
# Both skills will be triggered by "/codereview"
agent_context = AgentContext(
    load_public_skills=True,  # Loads public "code-review" skill
    skills=[
        Skill(
            name="custom-codereview-guide",  # Different name = coexists
            content="Project-specific guidelines...",
            trigger=KeywordTrigger(keywords=["/codereview"]),
        ),
    ],
)
```
Skill Activation Behavior: When multiple skills share a trigger, all matching skills are loaded. Content is concatenated into the agent’s context with public skills first, then explicitly defined skills. There is no smart merging—if guidelines conflict, the agent sees both.
You can also load public skills manually for more control:
```python
from openhands.sdk.context.skills import load_public_skills

# Load all public skills
public_skills = load_public_skills()

# Use with AgentContext
agent_context = AgentContext(skills=public_skills)

# Or combine with custom skills
my_skills = [
    Skill(name="custom", content="Custom instructions", trigger=None)
]
agent_context = AgentContext(skills=my_skills + public_skills)
```
```python
from openhands.sdk.context.skills import load_public_skills

# Load from a custom repository
custom_skills = load_public_skills(
    repo_url="https://github.com/my-org/my-skills",
    branch="main",
)
```
The load_public_skills() function uses git-based caching for efficiency:
- **First run**: clones the skills repository to `~/.openhands/cache/skills/public-skills/`
- **Subsequent runs**: pulls the latest changes to keep skills up to date
- **Offline mode**: uses the cached version if the network is unavailable
This approach is more efficient than fetching individual skill files via HTTP and ensures you always have access to the latest community skills.
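The clone-or-pull-with-fallback pattern described above can be sketched as follows. This is an illustrative sketch only, not the SDK's actual implementation, and `sync_skills_cache` is a hypothetical helper:

```python
import subprocess
from pathlib import Path


def sync_skills_cache(repo_url: str, cache_dir: Path) -> Path:
    """Sketch of git-based caching: clone on first run, pull afterwards,
    and fall back to the existing checkout when offline."""
    if not (cache_dir / ".git").exists():
        # First run: shallow-clone the skills repository into the cache
        subprocess.run(
            ["git", "clone", "--depth", "1", repo_url, str(cache_dir)],
            check=True,
        )
    else:
        try:
            # Subsequent runs: fast-forward to the latest changes
            subprocess.run(
                ["git", "-C", str(cache_dir), "pull", "--ff-only"],
                check=True,
                timeout=30,
                capture_output=True,
            )
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired, OSError):
            pass  # Offline or git unavailable: use the cached version as-is
    return cache_dir
```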
Explore available public skills at github.com/OpenHands/extensions. These skills cover various domains like GitHub integration, Python development, debugging, and more.
Custom template example (custom_system_prompt.j2):
```jinja2
You are a helpful coding assistant for {{ repo_name }}.

{% if cli_mode %}
You are running in CLI mode. Keep responses concise.
{% endif %}

Follow these guidelines:
- Write clean, well-documented code
- Consider edge cases and error handling
- Suggest tests when appropriate
```
Key points:
- Use relative filenames (e.g., `"system_prompt.j2"`) to load from the agent’s prompts directory
- Use absolute paths (e.g., `"/path/to/prompt.j2"`) to load from any location
- Pass variables to the template via `system_prompt_kwargs`
- The `system_message_suffix` from `AgentContext` is automatically appended after your custom prompt
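The lookup rule for `system_prompt_filename` can be sketched as below. `resolve_prompt_path` is a hypothetical helper mirroring the documented behavior, not the SDK's actual code, and `/agent/prompts` is a made-up prompts directory:

```python
from pathlib import Path


def resolve_prompt_path(filename: str, prompts_dir: str) -> Path:
    """Relative names load from the agent's prompts directory;
    absolute paths load from any location."""
    p = Path(filename)
    return p if p.is_absolute() else Path(prompts_dir) / p


print(resolve_prompt_path("system_prompt.j2", "/agent/prompts"))
# /agent/prompts/system_prompt.j2
print(resolve_prompt_path("/path/to/prompt.j2", "/agent/prompts"))
# /path/to/prompt.j2
```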