GitHub shipped gh skill in public preview on April 16, 2026, giving developers a single command to install, pin, search, and publish agent skills across every major AI coding host. The same week, security researchers proved that the broader skills ecosystem is already systematically compromised. Those two facts belong in the same sentence.
What gh skill Actually Does
GitHub CLI v2.90.0 introduces five subcommands: install, preview, search, update, and publish (Release GitHub CLI 2.90.0 · cli/cli). The command resolves the host-directory problem automatically — depending on which agent is active, it places skills in .github/skills, .claude/skills, .agents/skills, or the user home-directory equivalents (Manage agent skills with GitHub CLI — GitHub Changelog).
At launch, supported hosts include GitHub Copilot, Claude Code, Cursor, Codex, Gemini CLI, and Antigravity, with broader adoption underway (Manage agent skills with GitHub CLI — GitHub Changelog). Skills are filesystem-based, not API-based: any agent that can read a directory structure and parse Markdown can consume one (About agent skills — GitHub Docs). That design choice is both the feature and the vulnerability.
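To make the filesystem-plus-Markdown point concrete, here is a minimal sketch of how an agent could discover installed skills. The directory names come from the changelog; the frontmatter parsing is a deliberate simplification, not the spec's actual parser, and the key names are whatever a given skill author wrote.

```python
import os

# Candidate project-level skill directories named in the changelog.
# (Illustrative list; home-directory equivalents are omitted here.)
SKILL_DIRS = [".github/skills", ".claude/skills", ".agents/skills"]

def parse_frontmatter(text):
    """Parse a minimal frontmatter block from SKILL.md.
    Assumes simple 'key: value' lines between '---' fences --
    a simplification of real YAML frontmatter."""
    if not text.startswith("---"):
        return {}
    try:
        _, block, _ = text.split("---", 2)
    except ValueError:
        return {}
    meta = {}
    for line in block.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def discover_skills(root="."):
    """Walk candidate directories and yield (name, metadata) pairs."""
    for base in SKILL_DIRS:
        base_path = os.path.join(root, base)
        if not os.path.isdir(base_path):
            continue
        for name in sorted(os.listdir(base_path)):
            manifest = os.path.join(base_path, name, "SKILL.md")
            if os.path.isfile(manifest):
                with open(manifest, encoding="utf-8") as f:
                    yield name, parse_frontmatter(f.read())
```

Nothing here is host-specific: any agent with directory access and a Markdown parser can run the equivalent loop, which is exactly why the format travels and exactly why a malicious skill travels with it.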
The Open Agent Skills Spec: Portability and Its Limits
The Agent Skills specification’s promise is that a skill written once runs on any compliant host. That claim is architecturally correct: the filesystem-plus-Markdown model is genuinely host-neutral (About agent skills — GitHub Docs). In practice, host-specific context-loading differences and behavioral variations mean skills still need per-host validation before you rely on them in production. “Runs everywhere” understates the work.
The upside is real nonetheless. Skills live at the project level or user home level, which means they travel with the repo, get committed to version control, and remain auditable in the same way as any other dotfile (About agent skills — GitHub Docs).
Version Pinning: What SHA Tracking Does and Doesn’t Protect
The --pin flag locks a skill to a specific tag or commit SHA. Pinned skills are skipped during gh skill update --all (Manage agent skills with GitHub CLI — GitHub Changelog). Git tree SHAs are embedded in SKILL.md frontmatter as provenance metadata, so the tooling can detect real content changes rather than version label bumps (Manage agent skills with GitHub CLI — GitHub Changelog).
This is a meaningful improvement over unversioned installs. However, SHA-based tracking does not prevent a threat actor from publishing a clean skill, accumulating downloads, and later replacing content via a force-push to the same tag. Tag immutability is opt-in on most Git hosts, not guaranteed. The provenance system catches drift after the fact; it does not prevent a malicious initial payload.
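The drift-detection idea behind SHA provenance can be sketched without git at all. This is not GitHub's actual tree-SHA scheme, just an illustrative content digest: any byte-level change to any file in a skill directory changes the digest, while bumping a version label alone does not.

```python
import hashlib
import os

def skill_digest(skill_dir):
    """Deterministic content digest over every file in a skill
    directory, in sorted path order. A stand-in for the tree-SHA
    provenance idea, not GitHub's real implementation."""
    h = hashlib.sha256()
    for root, dirs, files in os.walk(skill_dir):
        dirs.sort()  # make traversal order deterministic
        for name in sorted(files):
            path = os.path.join(root, name)
            rel = os.path.relpath(path, skill_dir)
            h.update(rel.encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

def has_drifted(skill_dir, recorded_digest):
    """True if on-disk content no longer matches the digest
    recorded at install time."""
    return skill_digest(skill_dir) != recorded_digest
```

Note what this does and does not buy you, mirroring the limits above: it catches post-install tampering, but a digest recorded over an already-malicious payload verifies cleanly forever.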
Supply Chain Reality: The ToxicSkills Numbers
Snyk’s ToxicSkills study, published in April 2026, scanned 3,984 skills from ClawHub and skills.sh (ToxicSkills: Snyk Finds Prompt Injection in 36%, 1,467 Malicious Payloads in Agent Skills Supply Chain Study). The findings:
- 36.82% (1,467 skills) carry security issues of any severity (ToxicSkills: Snyk Finds Prompt Injection in 36%, 1,467 Malicious Payloads in Agent Skills Supply Chain Study)
- 13.4% (534 skills) contain at least one critical-level flaw (ToxicSkills: Snyk Finds Prompt Injection in 36%, 1,467 Malicious Payloads in Agent Skills Supply Chain Study)
- 91% of confirmed malicious skills combine prompt injection with traditional malware techniques (ToxicSkills: Snyk Finds Prompt Injection in 36%, 1,467 Malicious Payloads in Agent Skills Supply Chain Study)
- 76 confirmed malicious payloads identified through manual review (ToxicSkills: Snyk Finds Prompt Injection in 36%, 1,467 Malicious Payloads in Agent Skills Supply Chain Study)
- 10.9% of ClawHub skills contain hardcoded secrets (ToxicSkills: Snyk Finds Prompt Injection in 36%, 1,467 Malicious Payloads in Agent Skills Supply Chain Study)
One important scope note: the ToxicSkills study covers ClawHub and skills.sh, not GitHub’s own skill registry. Conflating the two inflates the risk number specifically for the gh skill surface. The broader threat class is real; the specific denominator matters.
Attack categories identified in the wild include password-protected ZIP archives with obfuscated install scripts, base64-encoded commands that exfiltrate AWS credentials, and instructions that disable agent safety mechanisms (ToxicSkills: Snyk Finds Prompt Injection in 36%, 1,467 Malicious Payloads in Agent Skills Supply Chain Study).
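The attack categories above suggest what a rough static pre-install check might look for. The pattern list below is illustrative only, drawn from the categories Snyk describes; it is nowhere near a complete detection ruleset, and a determined attacker evades all of it.

```python
import re

# Heuristic indicators drawn from the attack categories above.
# (Illustrative, not a production ruleset.)
SUSPICIOUS_PATTERNS = {
    "long base64 blob": re.compile(r"[A-Za-z0-9+/]{80,}={0,2}"),
    "pipe-to-shell": re.compile(r"curl[^|\n]*\|\s*(ba)?sh"),
    "aws credential path": re.compile(r"\.aws/credentials"),
    "safety override": re.compile(r"ignore (all )?previous instructions",
                                  re.IGNORECASE),
}

def scan_skill_text(text):
    """Return the names of indicator patterns found in skill content."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items()
            if pat.search(text)]
```

A scan like this belongs alongside, not instead of, reading the preview output: it flags the lazy payloads, and human review has to catch the rest.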
The Silent Exfiltration Vector
Mitiga demonstrated a concrete attack path using a trojanized skill (AI Agent Supply Chain Risk: Silent Codebase Exfiltration via Skills — Mitiga). The sequence: the agent silently copies the entire local codebase, adds a remote pointing to an attacker-controlled server, and pushes all contents. The attack requires only four user interactions and leaves audit logs empty (AI Agent Supply Chain Risk: Silent Codebase Exfiltration via Skills — Mitiga).
This attack class is notable because it operates entirely within behaviors that look like normal agent activity. There is no obvious error, no permission dialog, no anomalous process. The skill instructs the agent; the agent executes filesystem and git operations it is expected to perform.
Filesystem-based skills have no sandbox boundary between the skill’s instructions and the agent’s full operating environment. That is the same property that makes them portable across hosts.
Practical Hygiene
Given the above, the minimum viable approach to using gh skill:
- Run gh skill preview before every install. Read the output in full. This is the only inspection step before the skill has filesystem access (Manage agent skills with GitHub CLI — GitHub Changelog).
- Pin everything with --pin to a commit SHA, not a mutable tag. A SHA is harder to silently replace (Manage agent skills with GitHub CLI — GitHub Changelog).
- Prefer skills from sources you can audit: organization-owned repos where you can inspect the commit history are lower risk than anonymous registry entries.
- Audit installed skills regularly. Tools like mcp-scan can surface prompt injection patterns in installed skill files.
- Treat skills like dependencies, not configuration. They execute with the same authority as your agent session.
FAQ
Does gh skill have any built-in malware scanning?
No. As of April 20, 2026, GitHub does not verify skills before they appear in registries accessible via gh skill search. The preview subcommand shows you the skill content before installation, but interpretation is left to the developer (Manage agent skills with GitHub CLI — GitHub Changelog).
If I pin to a SHA, am I protected from supply chain attacks?
Partially. A SHA pin prevents silent updates after installation. It does not protect you if the skill was already malicious at the SHA you pinned, and it does not prevent a threat actor from convincing you to install a new SHA that contains a payload. The Mitiga attack path requires only that the skill be installed once (AI Agent Supply Chain Risk: Silent Codebase Exfiltration via Skills — Mitiga).
Does the open Agent Skills spec include a sandboxing or capability model?
No. The specification is filesystem-based and relies on the host agent’s existing permission model (About agent skills — GitHub Docs). Skills can instruct an agent to perform any action that agent can perform. There is no skill-level capability boundary defined in the spec as currently published.