---
name: write-script-postgresql
description: MUST use when writing PostgreSQL queries.
---
## CLI Commands
Place scripts in a folder.
After writing, tell the user which command fits what they want to do:
- `wmill script preview <script_path>` — **default when iterating on a local script.** Runs the local file without deploying.
- `wmill script run <path>` — runs the script **already deployed** in the workspace. Use only when the user explicitly wants to test the deployed version, not local edits.
- `wmill generate-metadata` — generate `.script.yaml` and `.lock` files for the script you modified.
- `wmill sync push` — deploy local changes to the workspace. Only suggest/run this when the user explicitly asks to deploy/publish/push — not when they say "run", "try", or "test".
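The commands above form a typical iterate-then-deploy loop. A sketch of that sequence — the script path is hypothetical, and `<args>` stands for the script's actual arguments:

```shell
# Iterate: run the local file without deploying (path and args are placeholders)
wmill script preview f/examples/my_query.sql -d '<args>'

# Only once the user explicitly asks to deploy:
wmill generate-metadata   # regenerate .script.yaml / .lock for the edited script
wmill sync push           # deploy local changes to the workspace
```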
### Preview vs run — choose by intent, not habit
If the user says "run the script", "try it", "test it", or "does it work" while there are **local edits to the script file**, use `script preview`. Do NOT push the script just to `script run` it — pushing is a deploy, and deploying merely to test overwrites the workspace version with untested changes.
Only use `script run` when:
- The user explicitly says "run the deployed version" / "run what's on the server".
- There is no local script being edited (you're just invoking an existing script).
Only use `sync push` when:
- The user explicitly asks to deploy, publish, push, or ship.
- The preview has already validated the change and the user wants it in the workspace.
### After writing — offer to test, don't wait passively
If the user hasn't already told you to run/test/preview the script, offer it as a one-sentence next step (e.g. "Want me to run `wmill script preview` with sample args?"). Do not present a multi-option menu.
If the user already asked to test/run/try the script in their original request, skip the offer and just execute `wmill script preview <path> -d '<args>'` directly — pick plausible args from the script's declared parameters. How parameters are declared varies by language: `main(...)` for code languages, the SQL dialect's own placeholder syntax (`$1` for PostgreSQL, `?` for MySQL/Snowflake, `@P1` for MSSQL, `@name` for BigQuery, etc.), positional `$1`, `$2`, … for Bash, and `param(...)` for PowerShell.
`wmill script preview` does not deploy, but it still executes script code and may cause side effects; run it yourself when the user asked to test/preview (or after confirming that execution is intended). `wmill sync push` and `wmill generate-metadata` modify workspace state or local files — only run these when the user explicitly asks; otherwise tell them which to run.
For a **visual** open-the-script-in-the-dev-page preview (rather than `script preview`'s run-and-print-result), use the `preview` skill.
Use `wmill resource-type list --schema` to discover available resource types.
# PostgreSQL
Arguments are obtained directly in the statement with `$1::{type}`, `$2::{type}`, etc.
Name the parameters by adding comments at the beginning of the script (without specifying the type):
```sql
-- $1 name1
-- $2 name2 = default_value
SELECT * FROM users WHERE name = $1::TEXT AND age > $2::INT;
```
## Receiving an S3Object as a script parameter
Declare the arg with type `(s3object)`. Windmill renders an S3 file picker for
it, downloads the file, and binds it as a `jsonb` parameter — Parquet/CSV files
are decoded server-side into a JSON array of records, JSON/JSONL pass through.
Consume with `jsonb_to_recordset` (or any `jsonb` API):
```sql
-- $1 file (s3object)
SELECT *
FROM jsonb_to_recordset($1::jsonb) AS r(id INT, name TEXT);
```
## Streaming query results to S3
Add a `-- s3` directive at the top of the script to stream the result set to S3
instead of returning rows. Windmill writes the file and returns its `S3Object`
as the script result.
```sql
-- s3 prefix=exports/users format=parquet
SELECT id, name FROM users;
```
All keys are optional: `prefix` (object key prefix), `storage` (named storage —
omit to use the workspace default), `format` (`json` (default), `parquet`, or
`csv`). Use this for large result sets — rows stream directly to S3 instead of
being buffered as the script return value.
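A fuller directive with every key spelled out — the storage name here is hypothetical and must match a named storage configured in the workspace:

```sql
-- s3 prefix=exports/users storage=my_named_storage format=csv
SELECT id, name FROM users;
```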