Scheduler

Background job processing for digests, reminder notifications, and content processing. The scheduling system is fully dynamic, storing cron patterns and job configurations in the SQLite database rather than using static environment variables.

Overview

The scheduler uses BullMQ with Redis to run periodic jobs. It is always enabled — Redis is a required dependency. If Redis is unreachable at startup, EchOS exits with an actionable error message.

Prerequisites

  • Redis instance running (default: redis://localhost:6379)

Configuration

Scheduling relies on the database and dynamic APIs. Only the Redis connection variable is maintained in .env:

| Variable | Description | Default | Example |
| --- | --- | --- | --- |
| `REDIS_URL` | Redis connection URL | `redis://localhost:6379` | `redis://myhost:6379` |

Managing Schedules

You can create, update, and delete schedules dynamically without restarting the application. EchOS provides two ways to do this:
  1. Agent Tool: The manage_schedule tool allows the AI assistant to manage background jobs on your behalf.
  2. Web API: The @echos/web plugin exposes RESTful CRUD endpoints under /api/schedules (requires WEB_API_KEY).
Every schedule contains a JSON config field designed to pass custom arguments to its plugin’s job processor (e.g., custom prompts or categorization rules).
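As an illustrative sketch, a client could build the JSON body for a /api/schedules request like this. The payload shape and field names (`type`, `cron`, `config`) are assumptions for illustration, not the confirmed API contract:

```typescript
// Hypothetical shape of a schedule payload for the /api/schedules endpoints.
// Field names here are assumptions, not the confirmed @echos/web contract.
interface SchedulePayload {
  type: string;                    // job type, e.g. "digest" or "resurface"
  cron: string;                    // standard 5-field cron expression
  config: Record<string, unknown>; // passed through to the plugin's job processor
}

function buildSchedulePayload(
  type: string,
  cron: string,
  config: Record<string, unknown> = {}
): string {
  const payload: SchedulePayload = { type, cron, config };
  return JSON.stringify(payload);
}

const body = buildSchedulePayload("digest", "0 8 * * *", { lookbackDays: 3 });
```

A request carrying such a body would also need to be authenticated with WEB_API_KEY; consult the @echos/web plugin for the actual field names and header.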

Jobs

Daily Digest (digest)

The Digest Plugin generates summaries of your latest notes, memories, and upcoming reminders.
  • Type: digest
  • Config options: prompt (varies the tone), lookbackDays (how far back to summarize), categories (filters notes by tags).
  • Default behavior: Creates a temporary AI agent and sends the summary via the Notification Service.
  • Reading Queue section: The digest automatically calls reading_queue (limit 3) and includes a “Reading Queue” section showing your top 3 unread items.
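For illustration, a digest schedule's config field (using the option names listed above; the values are only examples) might look like:

```json
{
  "prompt": "Use a professional tone.",
  "lookbackDays": 3,
  "categories": ["business", "music"]
}
```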

Reminder Check (reminder-check)

Queries the SQLite database for pending reminders whose due date is in the past. Due reminders are sorted by priority (high first) and sent as a notification. Note: the system also runs an internal check every minute to dispatch due reminders; this built-in check is separate from user-customizable schedules.
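The selection step above can be sketched as follows. The types and field names are illustrative, not EchOS's actual schema:

```typescript
// Minimal sketch of the reminder-check pass: keep reminders whose due date
// has passed, then sort high-priority first.
type Priority = "high" | "medium" | "low";
interface Reminder { id: number; text: string; dueAt: Date; priority: Priority; }

const priorityRank: Record<Priority, number> = { high: 0, medium: 1, low: 2 };

function dueReminders(all: Reminder[], now: Date = new Date()): Reminder[] {
  return all
    .filter((r) => r.dueAt.getTime() <= now.getTime())
    .sort((a, b) => priorityRank[a.priority] - priorityRank[b.priority]);
}
```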

Trash Purge (trash_purge)

Permanently removes notes that have been soft-deleted for more than 30 days. This job runs automatically on a fixed schedule and cannot be customized via manage_schedule.
  • Schedule: Daily at 3 AM (0 3 * * *)
  • Behavior: Fetches all status=deleted notes, checks deleted_at, and for each note older than 30 days: removes the .md file from knowledge/.trash/, purges the SQLite row, and removes the LanceDB vector
  • Location: packages/scheduler/src/workers/trash-purge.ts
No configuration is required. Notes restored before the 30-day window are unaffected.
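The age check that decides whether a soft-deleted note is purged can be sketched as below. The 30-day cutoff mirrors the text; the function and its shape are illustrative, not the actual worker code:

```typescript
// Sketch of the purge cutoff: a soft-deleted note is eligible once its
// deleted_at timestamp is more than 30 days in the past.
const PURGE_AFTER_DAYS = 30;

function isPurgeEligible(deletedAt: Date, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - deletedAt.getTime();
  return ageMs > PURGE_AFTER_DAYS * 24 * 60 * 60 * 1000;
}
```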

Content Processing (process_article, process_youtube)

Processes long-form article and YouTube URLs queued by the agent during conversations. This runs automatically when URLs are submitted; no manual schedule creation is needed.

RSS Feed Poll (rss_poll)

The RSS plugin polls all subscribed feeds for new articles and saves them as notes.
  • Type: rss_poll
  • Default schedule: Every 4 hours (0 */4 * * *) — auto-registered on first startup, no manual setup needed
  • Config options: none
  • Behavior: For each subscribed feed, fetches the RSS/Atom XML, filters entries newer than lastEntryDate, extracts full article text, applies AI categorization, and saves each new entry as a note. Deduplication is atomic — concurrent poll and refresh cannot create duplicate notes.
Example: “Change the RSS poll to run every 2 hours.” → updates the rss-poll schedule to cron 0 */2 * * *.
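The lastEntryDate filter described above can be sketched as follows; the entry shape is an illustrative assumption:

```typescript
// Sketch of the new-entry filter: keep only entries published after the
// feed's recorded lastEntryDate. A null lastEntryDate means first poll.
interface FeedEntry { title: string; link: string; publishedAt: Date; }

function newEntries(entries: FeedEntry[], lastEntryDate: Date | null): FeedEntry[] {
  if (lastEntryDate === null) return entries; // first poll: take everything
  return entries.filter((e) => e.publishedAt.getTime() > lastEntryDate.getTime());
}
```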

Knowledge Resurfacing (resurface)

The Resurface Plugin surfaces forgotten notes from your knowledge base using spaced repetition and on-this-day discovery.
  • Type: resurface
  • Config options:
    • mode — 'forgotten' (oldest un-surfaced first), 'on_this_day' (same calendar date in prior years), 'random', or 'mix' (default)
    • limit — number of notes to broadcast (default: 3, max: 10)
  • Behavior: Queries notes where last_surfaced is NULL or older than 7 days, picks the best candidates per strategy, sends a formatted Telegram notification, and updates each note’s last_surfaced timestamp.
Example schedule: “Schedule a daily knowledge resurfacing at 9am” → cron 0 9 * * *, type resurface.
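The candidate query (last_surfaced NULL or older than 7 days) can be sketched as below, with illustrative types standing in for the real schema:

```typescript
// Sketch of the resurface candidate filter: notes never surfaced, or last
// surfaced more than 7 days ago.
interface Note { id: number; lastSurfaced: Date | null; }

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function resurfaceCandidates(notes: Note[], now: Date = new Date()): Note[] {
  return notes.filter(
    (n) =>
      n.lastSurfaced === null ||
      now.getTime() - n.lastSurfaced.getTime() > SEVEN_DAYS_MS
  );
}
```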

Architecture

src/index.ts

    ├── Database (SQLite `job_schedules` table)
    ├── NotificationService (from @echos/telegram or log-only fallback)

    └── @echos/scheduler
         ├── Queue (BullMQ)
         ├── ScheduleManager (Syncs Database ↔ BullMQ)
         └── Worker
              └── Job Router (Plugin-aware routing)
                   ├── digest       → DigestPlugin Processor
                   ├── resurface    → ResurfacePlugin Processor
                   ├── rss_poll     → RSSPlugin Processor
                   ├── reminder     → Built-in Reminder Processor
                   ├── content      → Built-in Content Processor
                   └── trash_purge  → Built-in Trash Purge Processor
The ScheduleManager listens for changes made through the tools and Web API, applying them instantly to the running BullMQ scheduler.

Graceful Shutdown

On SIGINT/SIGTERM, the worker and queue are closed before interfaces and storage. In-progress jobs will complete before the worker shuts down.

Example Usage

With the scheduler running, you can ask the agent:
“Create a daily digest at 8am that looks back 3 days and focuses on ‘business’ and ‘music’ categories, using a professional tone.”
The agent will use manage_schedule (action: upsert) to build and inject the exact JSON configuration and cron expression 0 8 * * *.
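The resulting schedule record could then look something like this (field names are illustrative, not the exact job_schedules columns):

```json
{
  "type": "digest",
  "cron": "0 8 * * *",
  "config": {
    "lookbackDays": 3,
    "categories": ["business", "music"],
    "prompt": "Use a professional tone."
  }
}
```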