# Creating a Plugin
Plugins add content processors and agent tools to EchOS without modifying core code. Each plugin is a separate workspace package in `plugins/`.
## Steps
### Write the processor
Create `plugins/my-plugin/src/processor.ts` — the logic that fetches/transforms external content.

Security rules (non-negotiable):
- Always use `validateUrl()` before fetching any URL
- Always use `sanitizeHtml()` on external content
- Never use `eval()`, `Function()`, or `vm`
- Never log secrets
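As a rough illustration of what the first rule guards against, here is a hypothetical re-implementation of an SSRF check in the spirit of `validateUrl()`. It is illustrative only (the real helper's behavior may differ); plugins should call the provided helper, not this sketch:

```typescript
// Hypothetical sketch of the kind of SSRF guard validateUrl() performs.
// Illustrative only; plugins should call the real helper instead.
function isSafePublicUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable absolute URL
  }
  // Only plain http(s): no file:, ftp:, data:, etc.
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  const host = url.hostname.toLowerCase();
  // Reject obvious local/private targets; a production guard also
  // resolves DNS and checks the resulting IP ranges.
  if (host === "localhost" || host === "127.0.0.1" || host === "[::1]") return false;
  if (/^10\./.test(host)) return false;
  if (/^192\.168\./.test(host)) return false;
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return false;
  return true;
}
```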
### Create the agent tool
Create `plugins/my-plugin/src/tool.ts` — defines the tool the LLM agent can call.

New in this example: AI-powered auto-categorization support using the `categorizeContent` function from `@echos/core`. When `autoCategorize=true`, the plugin will automatically extract category, tags, and optionally gist/summary/key points from the content.

## Auto-discovery
Plugins are auto-discovered at runtime — `src/plugin-loader.ts` scans the `plugins/` directory and dynamically imports any `@echos/plugin-<dirname>` package. No manual import or registration in `src/index.ts` is needed.

## PluginContext API
Every plugin receives a `PluginContext` with:
| Property | Type | Description |
|---|---|---|
| `sqlite` | `SqliteStorage` | Metadata DB (upsert, query, FTS5 search) |
| `markdown` | `MarkdownStorage` | Markdown file storage (save, read, delete) |
| `vectorDb` | `VectorStorage` | Vector embeddings (upsert, search) |
| `generateEmbedding` | `(text: string) => Promise<number[]>` | Generate embedding vectors |
| `logger` | `Logger` (Pino) | Structured logger |
| `config` | `Record<string, unknown>` | App config (API keys, etc.) |
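To make the table concrete, here is a sketch of a persistence step that touches a few of these members. The `PluginContext` interface below is trimmed to what the example uses, and the method signatures (`upsert`, `save`) are assumptions for illustration, not the real API:

```typescript
// Trimmed PluginContext: member names come from the table above; the
// method signatures shown here are illustrative assumptions.
interface PluginContext {
  sqlite: { upsert(row: Record<string, unknown>): void };
  markdown: { save(path: string, body: string): void };
  logger: { info(msg: string): void };
}

// Persist a processed item: markdown file first, then the metadata row
// that FTS5 search indexes.
function persistNote(ctx: PluginContext, title: string, body: string): string {
  const path = `note/${title.toLowerCase().replace(/\s+/g, "-")}.md`;
  ctx.markdown.save(path, body);
  ctx.sqlite.upsert({ path, title });
  ctx.logger.info(`saved ${path}`);
  return path;
}
```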
## AI Categorization
Plugins can use the built-in categorization service from `@echos/core`. It:

- Uses `streamSimple` + `parseStreamingJson` to stream the LLM response progressively
- Fires `onProgress` as fields resolve: category → tags → gist (full mode)
- Handles errors with safe defaults (fallback to `'uncategorized'`)
- Respects content length limits (5000 chars for lightweight, 10000 for full)
- Is safe to call without `onProgress` — callers that don’t need streaming omit the last argument
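The limit and fallback behaviour can be pictured with a small sketch. These are hypothetical helpers for illustration, not the actual `@echos/core` implementation:

```typescript
// Hypothetical sketch of the guard rails listed above.
type CategorizeMode = "lightweight" | "full";
const CONTENT_LIMITS: Record<CategorizeMode, number> = {
  lightweight: 5000, // chars sent to the LLM in lightweight mode
  full: 10000,       // chars sent in full mode
};

// Clamp content to the mode's limit before sending it to the LLM.
function clampContent(text: string, mode: CategorizeMode): string {
  return text.slice(0, CONTENT_LIMITS[mode]);
}

// Safe default when the LLM result is missing or unusable.
function categoryOrFallback(category: string | undefined): string {
  return category && category.trim() !== "" ? category : "uncategorized";
}
```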
## Existing plugins
| Plugin | Package | Description |
|---|---|---|
| YouTube | @echos/plugin-youtube | Transcript extraction + Whisper fallback |
| Article | @echos/plugin-article | Web article extraction via Readability + DOMPurify |
| Twitter | @echos/plugin-twitter | Tweet/thread extraction via FxTwitter API |
| Image | @echos/plugin-image | Image storage with metadata extraction (Sharp) |
| Resurface | @echos/plugin-resurface | Knowledge resurfacing via spaced repetition and on-this-day discovery |
| Journal | @echos/plugin-journal | Dedicated journaling, AI reflection, and daily prompts |
| PDF | @echos/plugin-pdf | PDF text extraction via pdf-parse |
| Audio | @echos/plugin-audio | Podcast/audio transcription via OpenAI Whisper |
| RSS Feeds | @echos/plugin-rss | Automated RSS/Atom feed subscription and article ingestion |
## Twitter Plugin
The Twitter plugin (@echos/plugin-twitter) provides the save_tweet tool for saving tweets and threads from Twitter/X.
Features:
- Save individual tweets with full metadata (text, author, engagement stats, media URLs)
- Automatic thread unrolling — reply chains by the same author are merged into a clean article
- Quote tweet extraction
- Media URLs referenced as markdown links (images and videos)
- Optional AI categorization for automatic tagging
- No API key required — uses the free FxTwitter API (`api.fxtwitter.com`)

Supported URL formats:
- `twitter.com/<user>/status/<id>`
- `x.com/<user>/status/<id>`
- `mobile.twitter.com/<user>/status/<id>`
- `fxtwitter.com/<user>/status/<id>`
- `vxtwitter.com/<user>/status/<id>`
- All formats support query parameters (`?s=20`, `?t=...`)
Thread unrolling:
- Walks up the reply chain via `replying_to_status` to find earlier tweets by the same author
- Stops when a different author is reached or the chain exceeds 25 tweets
- Merges thread tweets into a clean article format, stripping self-reply @mentions
- Single tweets (no thread) are saved in blockquote format with engagement stats
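The merge step can be sketched as follows. The `Tweet` shape and the function name are hypothetical; the plugin's real types are not shown in this doc:

```typescript
// Hypothetical tweet shape; the real plugin's type may differ.
interface Tweet {
  author: string; // handle without the leading @
  text: string;
}

// Join a thread into one article body, stripping the leading self-reply
// @mention that replies by the same author carry.
function mergeThread(tweets: Tweet[]): string {
  if (tweets.length === 0) return "";
  const author = tweets[0].author;
  return tweets
    .map(t => t.text.replace(new RegExp(`^@${author}\\s+`), ""))
    .join("\n\n");
}
```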
## Image Plugin
The image plugin (@echos/plugin-image) provides the save_image tool for storing and organizing images in the knowledge base.
Features:
- Download images from URLs or accept base64 data
- Extract metadata: dimensions, format, file size, EXIF
- Store original files in `knowledge/image/{category}/`
- Create searchable markdown notes with image references
- Optional AI categorization for automatic tagging
- Supported formats: JPEG, PNG, GIF, WebP, AVIF, TIFF, BMP
- Maximum size: 20MB
Processing pipeline (`processImage`):
- Validates image format and size
- Extracts metadata using the Sharp library
- Generates a content-based filename hash
- Returns structured metadata and buffer
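The content-based filename step could look roughly like this. The hash algorithm and truncation length are assumptions for illustration; only Node's standard `crypto` module is used:

```typescript
import { createHash } from "node:crypto";

// Derive a stable filename from the image bytes, so re-saving the same
// image maps to the same file. SHA-256 truncated to 16 hex chars is an
// illustrative choice, not necessarily the plugin's actual scheme.
function hashedFilename(data: Buffer, ext: string): string {
  const hash = createHash("sha256").update(data).digest("hex").slice(0, 16);
  return `${hash}.${ext}`;
}
```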
Storage layout:
- Original file: `knowledge/image/{category}/{hash}.{ext}`
- Markdown note: `knowledge/note/{category}/{date}-{slug}.md`
- Embedded reference: the markdown note embeds a link to the stored image file
Telegram integration:
- Automatic photo handler via `bot.on('message:photo')`
- Downloads the photo from the Telegram API
- Passes the URL to the `save_image` tool
- Supports captions for context
## Resurface Plugin
The resurface plugin (@echos/plugin-resurface) brings forgotten knowledge back to the surface through spaced repetition and serendipitous discovery. It provides a get_resurfaced agent tool and a daily scheduled job that broadcasts notes via Telegram.
Features:
- Four resurfacing strategies:
  - `forgotten` — notes you haven’t seen in 7+ days, oldest first (classic spaced repetition)
  - `on_this_day` — notes created on the same calendar date in a prior year
  - `mix` (default) — blend of both for maximum serendipity
  - `random` — random sampling of un-recently-surfaced notes (supported by both the `get_resurfaced` tool and scheduler config)
- Tracks a `last_surfaced` timestamp per note in SQLite — never resurfaces the same note twice within 7 days
- Daily broadcast job sends 2–3 notes to Telegram with emoji labels (🔮 Resurfaced, 📅 On this day, 🎲 Discovery)
- On-demand access via the `get_resurfaced` tool
Trigger phrases:
- “surprise me”
- “what did I save before?”
- “on this day”
- “rediscover something”
- “show me something old”
- “random note”
“Schedule a daily knowledge resurfacing at 9am.”

The agent will create a `resurface` schedule with cron `0 9 * * *`. You can customize it:
| Key | Type | Default | Description |
|---|---|---|---|
| `mode` | `'forgotten' \| 'on_this_day' \| 'random' \| 'mix'` | `'mix'` | Resurfacing strategy |
| `limit` | `number` | `3` | Number of notes to broadcast (max 10) |
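For example, a daily broadcast tuned to on-this-day memories only might use (illustrative values):

```json
{
  "mode": "on_this_day",
  "limit": 5
}
```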
## PDF Plugin
The PDF plugin (@echos/plugin-pdf) provides the save_pdf tool for extracting and saving text from PDF documents.
Features:
- Downloads PDFs from public http(s) URLs and extracts text via `pdf-parse` (pure JS, no native deps)
- Preserves page count in the note header/body; stores author and source URL in frontmatter metadata (when available)
- Enforces a 10 MiB PDF download size limit; larger binaries are rejected with a clear error
- Truncates oversized content gracefully (max 500 000 chars), appending `[content truncated due to size limit]` and marking the “Extracted characters” field as `(truncated)`
- Fails clearly on password-protected or corrupt PDFs
- Optional AI categorization for automatic tagging
- URL validation via `validateUrl()` (SSRF protection: only public http(s) URLs; private/localhost/internal hosts are blocked)
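The truncation rule above can be sketched like this. The marker string is quoted from the feature list; the helper name and exact concatenation are assumptions:

```typescript
// Hypothetical sketch of the 500 000-char truncation rule.
const MAX_EXTRACT_CHARS = 500_000;
const TRUNCATION_MARKER = "[content truncated due to size limit]";

function truncateExtract(text: string): { text: string; truncated: boolean } {
  if (text.length <= MAX_EXTRACT_CHARS) return { text, truncated: false };
  return {
    text: text.slice(0, MAX_EXTRACT_CHARS) + "\n\n" + TRUNCATION_MARKER,
    truncated: true,
  };
}
```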
The saved note header includes:
- **Source:** source URL
- **Pages:** page count
- **Extracted characters:** character count (with truncation notice if applicable)
- **Author:** if present in PDF metadata
## Audio / Podcast Plugin
The audio plugin (@echos/plugin-audio) provides the save_audio tool for transcribing podcast episodes and audio files via OpenAI Whisper and saving the transcript as a searchable knowledge note.
Requires `OPENAI_API_KEY` — Whisper is an OpenAI API service. The plugin fails gracefully with a clear message if the key is absent.
Features:
- Downloads audio from public http(s) URLs and transcribes via `whisper-1`
- Supports `.mp3`, `.wav`, `.m4a`, `.ogg`, `.webm`, `.mp4`, `.flac`
- Files over 25 MB are split into 24 MB byte-range chunks and transcribed sequentially — no ffmpeg required
- Probes file size with a HEAD request, then falls back to a `Range: bytes=0-0` probe if `Content-Length` is absent
- Streams downloads with a hard 25 MB cap to prevent unbounded memory usage
- Saves notes with `inputSource: 'voice'` and `type: 'note'`
- Optional AI categorization via Anthropic
- Respects `WHISPER_LANGUAGE` config for language hints
The saved note header includes:
- **Source:** source URL
- **Format:** file extension (e.g. `MP3`)
- **File size:** human-readable size (KB/MB)
- **Duration estimate:** estimated from file size and format bitrate
- **Transcript length:** character count
Large-file handling:
- A HEAD request probes `Content-Length`
- If absent, a `Range: bytes=0-0` request reads the `Content-Range` total size
- If the total exceeds 25 MB, the file is fetched in 24 MB byte-range slices
- Each slice is transcribed separately; results are joined with `\n\n`
- If the server does not honour `Range` headers (returns 200 instead of 206), an informative error is returned
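The slicing math behind those steps can be sketched as follows (chunk size taken from the text; the ranges map onto standard `Range: bytes=start-end` headers):

```typescript
// Split a file of `total` bytes into inclusive byte ranges of at most
// 24 MB each, suitable for `Range: bytes=start-end` request headers.
const CHUNK_BYTES = 24 * 1024 * 1024;

function byteRanges(total: number): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  for (let start = 0; start < total; start += CHUNK_BYTES) {
    ranges.push([start, Math.min(start + CHUNK_BYTES, total) - 1]);
  }
  return ranges;
}
```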
Example prompts:
- “Save this podcast episode: [URL]”
- “Transcribe and save this interview recording”
- “Save the audio from this conference talk: [URL]”
## Journal Plugin
The journal plugin (@echos/plugin-journal) provides a dedicated journaling experience with two agent tools and an optional daily prompt job.
Features:
- Dedicated `journal` tool for creating journal/diary entries (replaces `create_note(type="journal")`)
- AI-powered `reflect` tool that synthesizes journal entries over a time period
- Optional `journal_prompt` scheduled job for daily journaling nudges via Telegram

### Tool: journal

Entries are saved with `type: 'journal'` and `status: 'read'`. After creating, the agent always calls `categorize_note` for automatic tagging.
### Tool: reflect
- Fetches journal entries within the date range (up to 50 entries)
- Spawns a sub-agent to synthesize patterns, mood trends, key themes, and insights
- Returns a warm, structured reflection with actionable suggestions
- Validates date ranges (max 365 days lookback)
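The date-range validation could be as simple as the following (hypothetical helper; the plugin's real checks are not shown here):

```typescript
// Accept a range only if it runs forward in time and spans at most
// 365 days, matching the reflect tool's lookback limit.
function isValidLookback(from: Date, to: Date): boolean {
  const days = (to.getTime() - from.getTime()) / 86_400_000;
  return days >= 0 && days <= 365;
}
```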
Trigger phrases:
- “reflect on my journal”
- “weekly journal review”
- “how has my week been?”
- “mood summary”
- “look back at my journaling”
“Schedule a daily journal prompt at 9pm.”

The agent will create a `journal_prompt` schedule with cron `0 21 * * *`.
Scheduler config options:
| Key | Type | Default | Description |
|---|---|---|---|
| `prompt` | `string` | Built-in journaling nudge | Custom prompt text (max 1000 chars) |
## RSS Feed Plugin
The RSS plugin (@echos/plugin-rss) provides the manage_feeds tool for subscribing to RSS and Atom feeds, with automatic background polling every 4 hours.
Features:
- Subscribe to any RSS 2.0 or Atom feed via URL
- Automatic deduplication — each article is saved at most once, even if a manual refresh and a scheduled poll run concurrently (atomic guid claim before processing)
- Full article extraction via `@echos/plugin-article` (Readability) — not just the feed summary
- AI categorization applied automatically when `ANTHROPIC_API_KEY` or `LLM_API_KEY` is configured
- Per-feed tags: all articles from a feed inherit its configured tags plus the `rss` tag
- Background polling every 4 hours via a self-registering BullMQ schedule (`rss-poll`)
- Plugin-specific SQLite database at `{DB_PATH}/rss.db` — separate from `echos.db`
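The dedup guarantee hinges on claiming each guid *before* processing, so two concurrent pollers cannot both save the same article. In the plugin this claim would be an atomic SQLite insert (e.g. `INSERT OR IGNORE` on a unique guid column, an assumption about the schema); a Set stands in for the table in this sketch:

```typescript
// In-memory stand-in for the seen-guids table. Only the first claim for
// a given guid returns true; later claimants skip the article.
const seenGuids = new Set<string>();

function claimGuid(guid: string): boolean {
  if (seenGuids.has(guid)) return false; // already claimed: skip
  seenGuids.add(guid);
  return true;
}
```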
| Action | Description |
|---|---|
| `add` | Subscribe to a feed. Validates the URL, fetches the feed to confirm it parses, then stores it. |
| `list` | List all subscribed feeds with last-checked time and saved article count. |
| `remove` | Unsubscribe from a feed. Previously saved articles are retained. |
| `refresh` | Immediately fetch and save new articles. Omit `url` to refresh all feeds. |
Example prompts:
- “Subscribe to this RSS feed: https://example.com/feed.xml, tag it as ‘news’”
- “List my RSS feed subscriptions”
- “Unsubscribe from https://example.com/feed.xml”
- “Refresh all my RSS feeds now”
The plugin self-registers a polling schedule (`rss-poll`) with cron `0 */4 * * *` (every 4 hours). The schedule is stored in SQLite and picked up by the ScheduleManager — no manual setup required.
To change the poll frequency, update the schedule via the agent:
“Change the RSS poll schedule to run every 2 hours.”

Storage:
- Each feed entry is stored as a note with `type: article`, `inputSource: url`, and `sourceUrl` pointing to the original article
- Feed-specific data (subscriptions, seen guids) lives in `{DB_PATH}/rss.db` and is managed entirely by the plugin
- Deleting a feed subscription cascades to its entry records in `rss.db` — no orphan rows