---
name: Fetch Tweets
description: Search X/Twitter for tweets about a token, keyword, username, or topic
var: ""
tags: [social]
---
> **${var}** — Search query for X/Twitter. **Required** — set your query in aeon.yml.
Today is ${today}. Search X for tweets matching **${var}**.
## Steps
1. **Load previously-reported tweet IDs** from two sources, then union them into `SEEN_IDS` (a set of numeric tweet IDs — NOT full URLs):
- **Persistent seen-file** (`memory/fetch-tweets-seen.txt`) — if it exists, extract the `/status/<id>` ID from every line (regex `/status/(\d+)`). This file contains every tweet URL ever reported, preventing stale tweets from cycling back into notifications once log entries age out of the 3-day window.
- **Last 3 days of `memory/logs/`** — grep each log file for `https://x.com/.../status/<id>` occurrences and extract the numeric IDs (catches URLs not yet in the seen-file).
**Why ID-based, not URL-based?** Grok returns the same tweet under two different URL shapes: `x.com/<handle>/status/<id>` when it has the full text, and `x.com/i/status/<id>` when the tweet is only cited via `content.annotations[]` (see `scripts/filter-xai-tweets.py`). Across runs, those two forms refer to the same tweet, but naive URL matching treats them as different. 47% of historical seen URLs are in the `i/status` form, so ID-based dedup is the only safe approach.
You'll use `SEEN_IDS` in step 5 to filter duplicates.
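The union in step 1 can be sketched in shell. This is a minimal sketch assuming the file layout described above (`memory/fetch-tweets-seen.txt`, daily logs under `memory/logs/`); the helper name `collect_seen_ids` is illustrative, not part of the skill:

```bash
# Sketch: union previously-reported tweet IDs from the seen-file and the
# last 3 days of logs. Output is one numeric ID per line, sorted, unique.
collect_seen_ids() {
  { cat memory/fetch-tweets-seen.txt 2>/dev/null
    find memory/logs -name '*.md' -mtime -3 -exec cat {} + 2>/dev/null
  } | grep -oE '/status/[0-9]+' | grep -oE '[0-9]+' | sort -u
}
SEEN_IDS=$(collect_seen_ids)
```

Because both sources are reduced to bare numeric IDs, the `x.com/<handle>/status/<id>` and `x.com/i/status/<id>` forms collapse to the same entry.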
2. **Build the search prompt for Grok.** Pass `${var}` to Grok **verbatim** as the search query. Do NOT narrow it to a single angle (e.g. don't force "crypto token only", don't inject a contract address, don't filter by chain). Let Grok interpret OR/AND operators in the var as-is. The goal is broad coverage — token mentions, repo mentions, handle mentions, general chatter, all of it.
3. **Search tweets.** Use whichever path is available:
**Path A — pre-fetched cache** (preferred, when the workflow ran `scripts/prefetch-xai.sh`):
Before reading the cache, verify it was fetched for the **current** `${var}`. The prefetch script writes a sidecar `.xai-cache/fetch-tweets.query` containing the exact var it used. If the sidecar is missing or doesn't match `${var}`, the cache is stale (e.g. from a previous run after `${var}` changed in `aeon.yml`) — skip Path A and try Path B instead. Observed 2026-04-20: a stale `$AEON OR ...` cache served empty results and triggered redundant re-runs.
```bash
if [ -f .xai-cache/fetch-tweets.query ] && [ "$(cat .xai-cache/fetch-tweets.query)" = "${var}" ]; then
  cat .xai-cache/fetch-tweets.json 2>/dev/null | jq -r '.output[] | select(.type == "message") | .content[] | select(.type == "output_text") | .text'
else
  echo "xai-cache: miss or query mismatch, falling through to Path B"
fi
```
**Citation-source blocks:** when Grok's `output_text.text` hits its length cap, extra tweet URLs it collected during `x_search` are surfaced as annotations. `scripts/filter-xai-tweets.py` splices those back in as numbered blocks marked `Source: XAI annotation citation`. Treat them like any other tweet for dedup and reporting, but in the step 7 notification fall back to the raw URL as the link, since no engagement stats are known: render `N. x.com/handle — <title>` followed by `[View tweet](URL)`, and skip the `Likes/RTs` line. If no handle is parseable from the URL (`x.com/i/status/…`), use the URL itself as the header.
**Path B — X.AI API** (fallback, use when `XAI_API_KEY` is set and cache is empty):
```bash
FROM_DATE=$(date -u -d "yesterday" +%Y-%m-%d 2>/dev/null || date -u -v-1d +%Y-%m-%d)
TO_DATE=$(date -u +%Y-%m-%d)
curl -s -X POST "https://api.x.ai/v1/responses" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $XAI_API_KEY" \
-d '{
"model": "grok-4-1-fast",
"input": [{"role": "user", "content": "Search X for ALL tweets about: ${var}. Date range: '"$FROM_DATE"' to '"$TO_DATE"'. Return at least 10 tweets (more if available) — prioritize the most interesting, insightful, or highly-engaged posts but also include smaller accounts. For each tweet include: @handle, the full text, date posted, engagement (likes/retweets if available), and the direct link (https://x.com/handle/status/ID). Return as a numbered list."}],
"tools": [{"type": "x_search"}]
}'
```
Parse the response JSON to extract the text from the output array:
```bash
echo "$RESPONSE" | jq -r '.output[] | select(.type == "message") | .content[] | select(.type == "output_text") | .text'
```
**Path C — WebSearch fallback** (use when both cache and XAI_API_KEY are unavailable):
Use the built-in WebSearch tool to search for recent tweets. Construct a query like:
`site:x.com "${var}" after:${FROM_DATE}`
Note at the top of the log entry: "XAI_API_KEY not available; results compiled via WebSearch". WebSearch rankings favour high-engagement older tweets — **prioritise results that mention a date within the last 48 hours** when possible.
4. **If no relevant tweets found** (no results, API error, or empty): log "FETCH_TWEETS_EMPTY" to `memory/logs/${today}.md`, send a one-line notification via `./notify` (e.g. `Fetch Tweets — ${today}: no new tweets found for ${var}.`), and stop.
5. **Deduplicate against `SEEN_IDS` from step 1.** For each candidate tweet URL, extract the numeric tweet ID (regex `/status/(\d+)`) and check membership in `SEEN_IDS`. Drop any candidate whose ID is already in the set — this catches the same tweet even when Grok returns it under a different URL shape (`x.com/handle/...` vs `x.com/i/...`). If ALL tweets found are already in `SEEN_IDS`: log "FETCH_TWEETS_NO_NEW: all results already reported" to `memory/logs/${today}.md`, send a one-line notification via `./notify` (e.g. `Fetch Tweets — ${today}: N results found, all already reported in last 3 days.`), and stop.
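The dedup step can be sketched as a small filter. This assumes `SEEN_IDS` holds one numeric ID per line (as built in step 1); the helper name `filter_new` is illustrative:

```bash
# Sketch: read candidate tweet URLs on stdin, one per line, and print only
# those whose numeric /status/<id> is NOT already present in $SEEN_IDS.
filter_new() {
  while IFS= read -r url; do
    id=$(printf '%s\n' "$url" | grep -oE '/status/[0-9]+' | grep -oE '[0-9]+')
    if [ -n "$id" ] && ! printf '%s\n' "$SEEN_IDS" | grep -qx "$id"; then
      printf '%s\n' "$url"
    fi
  done
}
# Usage (CANDIDATES = newline-separated URLs from step 3):
# NEW_URLS=$(printf '%s\n' "$CANDIDATES" | filter_new)
```

Matching on the bare ID means `x.com/a/status/111` is dropped even when the seen set recorded it as `x.com/i/status/111`.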
6. **Save the results** (new tweets only) to `memory/logs/${today}.md`. Include the tweet URLs, handles, and engagement so future runs can deduplicate and so downstream skills (like `tweet-allocator`) can consume them.
6b. **Update the persistent seen-file** — append each new tweet URL (one per line) to `memory/fetch-tweets-seen.txt`. Create the file if it doesn't exist. This ensures these URLs are excluded from all future runs, regardless of log rotation.
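A minimal sketch of the seen-file update, assuming `NEW_URLS` holds the post-dedup URLs from step 5 (`>>` creates the file if it doesn't exist):

```bash
# Sketch: append each new tweet URL, one per line, to the persistent seen-file.
mkdir -p memory
if [ -n "$NEW_URLS" ]; then
  printf '%s\n' "$NEW_URLS" >> memory/fetch-tweets-seen.txt
fi
```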
7. **Send a notification via `./notify`** with up to 10 NEW tweets (those that survived dedup). Each tweet MUST include a clickable link. Use Telegram Markdown link format: `[link text](url)`.
Format the notification like this:
```
*Top Tweets — ${var} (${today})*
1. x.com/handle — [brief summary of tweet content]
Likes: X | RTs: Y
[View tweet](https://x.com/handle/status/ID)
2. x.com/handle — [brief summary]
Likes: X | RTs: Y
[View tweet](https://x.com/handle/status/ID)
... (up to 10 tweets)
```
IMPORTANT: Do NOT use @handle format — it tags/pings users on Telegram. Use x.com/handle instead (shows the profile URL without tagging anyone). The `[View tweet](URL)` link is required so users can tap to open each tweet.
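The message assembly can be sketched as a formatter. Everything here is an assumption for illustration: the record layout (`handle|summary|likes|rts|url`), the `format_notification` name, and the `QUERY`/`TODAY` shell variables standing in for `${var}`/`${today}`:

```bash
# Sketch: build the Telegram notification body from pipe-delimited records
# (handle|summary|likes|rts|url), capped at 10 tweets. Handles are rendered
# as x.com/handle (never @handle) per the rule above.
format_notification() {
  printf '*Top Tweets — %s (%s)*\n' "$QUERY" "$TODAY"
  n=0
  printf '%s\n' "$1" | while IFS='|' read -r handle summary likes rts url; do
    if [ -n "$handle" ] && [ "$n" -lt 10 ]; then
      n=$((n + 1))
      printf '%s. x.com/%s — %s\n   Likes: %s | RTs: %s\n   [View tweet](%s)\n' \
        "$n" "$handle" "$summary" "$likes" "$rts" "$url"
    fi
  done
}
# Usage: ./notify "$(format_notification "$RECORDS")"
```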
## Environment Variables Required
- `XAI_API_KEY` — X.AI API key (optional; skill falls back to WebSearch when not set, but quality is lower)