---
name: teamcraft-glgd:pipeline-health
description: Interpret CI/CD pipeline status with context — not raw logs, but an explanation of what failed and why. Optionally create a trackable GitLab issue from a persistent pipeline failure. Use when CI is failing, the build is broken, the pipeline is red, asking "why did the pipeline fail", or wanting to understand a CI/CD error. Also run when an MR's pipeline won't pass, deployments are stuck, or a recurring pipeline failure needs to be tracked as an issue.
argument-hint: "[branch name or pipeline ID — optional]"
disable-model-invocation: true
user-invocable: true
allowed-tools:
  - mcp__gitlab__list_projects
  - mcp__gitlab__list_pipelines
  - mcp__gitlab__get_pipeline
  - mcp__gitlab__list_pipeline_jobs
  - mcp__gitlab__get_pipeline_job
  - mcp__gitlab__get_pipeline_job_output
  - mcp__gitlab__create_issue
  - mcp__gitlab__list_labels
  - mcp__gitlab__retry_pipeline
  - mcp__gitlab__retry_pipeline_job
  - mcp__gitlab__cancel_pipeline
---

## Goal

Surface what failed in a pipeline and why — not raw logs, but an interpreted explanation a developer or DevOps engineer can act on. When a failure is persistent, offer to create a trackable GitLab issue from it so it does not get lost.

## Hard Constraints

- A branch name or pipeline ID from `$ARGUMENTS` is the starting point. If neither was provided, ask before searching.
- Never auto-create issues. Always show the draft issue and get explicit confirmation before creating anything in GitLab.
- Never retry or cancel a pipeline without explicit instruction from the user. Offer these as options after presenting findings — do not take action on your own.
- This skill uses only MCP tools. It works without codebase access and is usable in any environment.

## Identify the Pipeline

If a branch name or pipeline ID was provided in `$ARGUMENTS`, use it. If not, ask the user to specify which branch or which pipeline run they want to investigate.

Find the GitLab project from `.teamcraft/project.md` if it exists in the environment. If not, use `mcp__gitlab__list_projects` to see what is visible, surface the results, and ask the user which project they want to investigate. Never ask them to supply a namespace string.

Use `mcp__gitlab__list_pipelines` to locate the pipeline. For a branch, the most recent pipeline is typically the relevant one — confirm with the user if there is ambiguity.

## Read the Failure

Fetch the pipeline details and its jobs. For any failed or errored job, use `mcp__gitlab__get_pipeline_job_output` to read the actual log output.

Read the logs carefully. The goal is not to surface the log — it is to understand what the log is saying. Parse the failure at the right level of abstraction:

Not: "The test stage failed."

But: "The auth service integration tests are failing because `JWT_SECRET` is not set in the CI environment. The test runner is erroring on the first test that calls the token validation endpoint, and all subsequent tests in that suite are also failing as a result."

That level of explanation is the deliverable. A developer reading this should know what broke and what to do about it.

## Present Interpreted Findings

Present the failure explanation to the user. Cover:

**What failed** — which stage, which job, and the nature of the failure in plain language.

**Why it failed** — the root cause as read from the logs. Be specific: environment variables, dependency issues, test failures with test names, build errors with the error message.

**What to fix** — a concrete, actionable suggestion. Not generic advice. What specifically to change, add, or investigate.

If multiple jobs failed, cover each one.
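When several jobs fail for the same underlying reason, reducing each log to a coarse error signature makes the grouping mechanical. A minimal sketch, assuming plain-text job logs — the error-keyword and scrubbing patterns here are illustrative assumptions, not part of the skill:

```python
import re


def error_signature(log: str) -> str:
    """Reduce a job log to a coarse signature for grouping failures.

    Takes the first line that looks like an error and strips volatile
    details (timestamps, hex addresses, :line numbers) so repeated runs
    of the same failure produce the same signature. Patterns are
    illustrative, not exhaustive.
    """
    for line in log.splitlines():
        if re.search(r"\b(error|failed|fatal|exception)\b", line, re.IGNORECASE):
            line = re.sub(r"\d{4}-\d{2}-\d{2}[T ][\d:.]+Z?", "<ts>", line)
            line = re.sub(r"\b0x[0-9a-f]+\b", "<addr>", line)
            line = re.sub(r":\d+", ":<n>", line)
            return line.strip()
    return "<no error line found>"


def group_failures(job_logs: dict[str, str]) -> dict[str, list[str]]:
    """Group failed job names by shared error signature."""
    groups: dict[str, list[str]] = {}
    for name, log in job_logs.items():
        groups.setdefault(error_signature(log), []).append(name)
    return groups
```

Jobs that land in the same group get one shared root-cause explanation rather than three copies of the same diagnosis.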
Group failures that share a common root cause.

## Check for Persistence

If the user or the context suggests this failure has appeared across multiple pipeline runs on this branch, note that explicitly. A one-time flaky test and a structural configuration failure are different problems — the data helps distinguish them.

If the failure appears persistent (same job, same error across multiple runs), offer to create a GitLab issue to make it trackable.

## Offer to Create a Trackable Issue

If the failure is persistent and the user wants to track it, read `references/example-bug-issue.md` and draft a bug issue matching its structure. The draft must include all sections from the reference: Problem Statement, Environment, Steps to Reproduce, Current/Expected Behavior, Debug Information, Investigation Areas, Potential Approaches, Testing Requirements, Priority, and Labels. Populate from what the pipeline logs revealed.

Show the complete draft. Do not create the issue until the user confirms. After confirmation, create the issue in GitLab. Share the issue URL and IID.

## Offer Next Steps

After presenting findings, offer the developer their options:

- **Retry a specific failed job** — if the failure looks transient (network error, flaky test), offer to retry the job via `mcp__gitlab__retry_pipeline_job`
- **Retry the full pipeline** — if appropriate, via `mcp__gitlab__retry_pipeline`
- **Cancel a running pipeline** — if the current pipeline is stuck or irrelevant, via `mcp__gitlab__cancel_pipeline`

Present these as choices. Do not act without explicit instruction.
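The persistence check described above — same job, same error across multiple runs — can be sketched as a streak count over recent runs on the branch. The data shape below is an assumption for illustration (one failing job per run); real run data comes back from `mcp__gitlab__list_pipelines` and the job tools listed in the frontmatter:

```python
def is_persistent(runs: list[dict], min_runs: int = 2) -> bool:
    """Heuristic: does the newest failure repeat across recent runs?

    `runs` is newest-first; each entry is {"job": <failed job name>,
    "signature": <error signature>}, with signature None for a passing
    run. One failure per run is a simplification for illustration.
    """
    if not runs or runs[0]["signature"] is None:
        return False  # nothing failing right now
    job, sig = runs[0]["job"], runs[0]["signature"]
    streak = 0
    for run in runs:
        if run["job"] == job and run["signature"] == sig:
            streak += 1
        else:
            break  # streak broken by a pass or a different failure
    return streak >= min_runs
```

A streak of two or more identical failures suggests a structural problem worth an issue; a single occurrence is better treated as possibly flaky and offered a retry instead.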