
Creating a Plugin

Plugins add content processors and agent tools to EchOS without modifying core code. Each plugin is a separate workspace package in plugins/.

Steps

Step 1: Scaffold the package

mkdir -p plugins/my-plugin/src
Create plugins/my-plugin/package.json:
{
  "name": "@echos/plugin-my-plugin",
  "version": "0.1.0",
  "private": true,
  "type": "module",
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "scripts": {
    "build": "tsc",
    "dev": "tsc --watch",
    "typecheck": "tsc --noEmit",
    "clean": "rm -rf dist"
  },
  "dependencies": {
    "@echos/shared": "workspace:*",
    "@echos/core": "workspace:*",
    "@mariozechner/pi-agent-core": "^0.52.12",
    "@mariozechner/pi-ai": "^0.52.12",
    "pino": "^9.14.0",
    "uuid": "^13.0.0"
  },
  "devDependencies": {
    "@types/node": "^25.2.3",
    "@types/uuid": "^11.0.0",
    "typescript": "^5.7.0"
  }
}
Create plugins/my-plugin/tsconfig.json:
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src"]
}
Step 2: Write the processor

Create plugins/my-plugin/src/processor.ts — the logic that fetches/transforms external content:
import type { Logger } from 'pino';
import { validateUrl, sanitizeHtml } from '@echos/shared';
import type { ProcessedContent } from '@echos/shared';

export async function processMyContent(
  url: string,
  logger: Logger,
): Promise<ProcessedContent> {
  const validatedUrl = validateUrl(url); // SSRF prevention — required
  logger.info({ url: validatedUrl }, 'Processing content');

  // Fetch and extract content. The naive extraction below is a placeholder;
  // replace it with logic appropriate to your source.
  const response = await fetch(validatedUrl);
  const rawHtml = await response.text();
  const rawTitle = /<title[^>]*>([^<]*)<\/title>/i.exec(rawHtml)?.[1] ?? validatedUrl;
  const rawContent = rawHtml;

  const title = sanitizeHtml(rawTitle);   // Always sanitize external content
  const content = sanitizeHtml(rawContent);

  return {
    title,
    content,
    metadata: {
      type: 'note', // Use an existing ContentType or extend types
      sourceUrl: validatedUrl,
    },
    embedText: `${title}\n\n${content.slice(0, 3000)}`,
  };
}
Security rules (non-negotiable):
  • Always use validateUrl() before fetching any URL
  • Always use sanitizeHtml() on external content
  • Never use eval(), Function(), or vm
  • Never log secrets
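To make the first rule concrete, here is a minimal sketch of the kind of checks an SSRF guard performs. It is illustrative only (the name `assertSafeUrl` and its block list are hypothetical); always call the real `validateUrl` from `@echos/shared` rather than rolling your own.

```typescript
// Illustrative only: a minimal SSRF check. The real validateUrl in
// @echos/shared is the one plugins must use; its rules may differ.
function assertSafeUrl(raw: string): string {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== 'http:' && url.protocol !== 'https:') {
    throw new Error(`Blocked protocol: ${url.protocol}`);
  }
  const host = url.hostname;
  const blocked =
    host === 'localhost' ||
    host === '127.0.0.1' ||
    host === '169.254.169.254' || // cloud metadata endpoint
    host.startsWith('10.') ||
    host.startsWith('192.168.');
  if (blocked) {
    throw new Error(`Blocked host: ${host}`);
  }
  return url.toString();
}
```

The point is that URL validation happens before any network request, so a prompt-injected `http://169.254.169.254/...` never reaches `fetch`.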
Step 3: Create the agent tool

Create plugins/my-plugin/src/tool.ts — defines the tool the LLM agent can call:
import { Type, type Static } from '@mariozechner/pi-ai';
import type { AgentTool } from '@mariozechner/pi-agent-core';
import { v4 as uuidv4 } from 'uuid';
import type { NoteMetadata } from '@echos/shared';
import type { PluginContext } from '@echos/core';
import { categorizeContent, type ProcessingMode } from '@echos/core';
import { processMyContent } from './processor.js';

const schema = Type.Object({
  url: Type.String({ description: 'URL to process' }),
  tags: Type.Optional(Type.Array(Type.String(), { description: 'Tags' })),
  category: Type.Optional(Type.String({ description: 'Category' })),
  autoCategorize: Type.Optional(
    Type.Boolean({
      description: 'Automatically categorize using AI (default: false)',
      default: false,
    }),
  ),
  processingMode: Type.Optional(
    Type.Union([Type.Literal('lightweight'), Type.Literal('full')], {
      description: 'AI processing mode: "lightweight" (category+tags) or "full" (includes summary, gist, key points). Only used if autoCategorize is true.',
      default: 'full',
    }),
  ),
});

type Params = Static<typeof schema>;

export function createMyTool(
  context: PluginContext,
): AgentTool<typeof schema> {
  return {
    name: 'save_my_content',
    label: 'Save My Content',
    description: 'Describe what this tool does — the agent reads this to decide when to use it. Optionally auto-categorize with AI.',
    parameters: schema,
    execute: async (_toolCallId, params: Params, _signal, onUpdate) => {
      onUpdate?.({
        content: [{ type: 'text', text: `Processing ${params.url}...` }],
        details: { phase: 'fetching' },
      });

      const processed = await processMyContent(params.url, context.logger);

      const now = new Date().toISOString();
      const id = uuidv4();

      let category = params.category ?? 'uncategorized';
      let tags = params.tags ?? [];
      let gist: string | undefined;

      // Auto-categorize if requested and API key available
      if (params.autoCategorize && context.config.anthropicApiKey) {
        onUpdate?.({
          content: [{ type: 'text', text: 'Categorizing content with AI...' }],
          details: { phase: 'categorizing' },
        });

        try {
          const mode: ProcessingMode = params.processingMode ?? 'full';
          const result = await categorizeContent(
            processed.title,
            processed.content,
            mode,
            context.config.anthropicApiKey as string,
            context.logger,
            (message) => onUpdate?.({ content: [{ type: 'text', text: message }], details: { phase: 'categorizing' } }),
          );

          category = result.category;
          tags = result.tags;

          if ('gist' in result) {
            gist = result.gist;
          }

          context.logger.info(
            { category, tags, mode },
            'Content auto-categorized',
          );
        } catch (error) {
          context.logger.error({ error }, 'Auto-categorization failed, using defaults');
        }
      }

      const metadata: NoteMetadata = {
        id,
        type: 'note',
        title: processed.title,
        created: now,
        updated: now,
        tags,
        links: [],
        category,
        sourceUrl: params.url,
      };
      if (gist) metadata.gist = gist;

      // Save to all three storage layers
      const filePath = context.markdown.save(metadata, processed.content);
      context.sqlite.upsertNote(metadata, processed.content, filePath);

      if (processed.embedText) {
        try {
          const vector = await context.generateEmbedding(processed.embedText);
          await context.vectorDb.upsert({
            id,
            text: processed.embedText,
            vector,
            type: metadata.type,
            title: processed.title,
          });
        } catch {
          // Embedding failure is non-fatal
        }
      }

      return {
        content: [
          {
            type: 'text' as const,
            text: `Saved "${processed.title}" (id: ${id})\nCategory: ${category}\nTags: [${tags.join(', ')}]${gist ? `\nGist: ${gist}` : ''}`,
          },
        ],
        details: { id, filePath, title: processed.title, category, tags },
      };
    },
  };
}
This example also wires in AI auto-categorization via the categorizeContent function from @echos/core: when autoCategorize is true, the tool automatically extracts a category and tags from the content, and in full mode a gist, summary, and key points as well.
Step 4: Export the plugin

Create plugins/my-plugin/src/index.ts:
import type { EchosPlugin, PluginContext } from '@echos/core';
import { createMyTool } from './tool.js';

const myPlugin: EchosPlugin = {
  name: 'my-plugin',
  description: 'What this plugin does',
  version: '0.1.0',

  setup(context: PluginContext) {
    return [createMyTool(context)];
  },

  // Optional: cleanup on shutdown
  // teardown() { ... },
};

export default myPlugin;
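If your plugin acquires long-lived resources (timers, watchers, connections) in setup, release them in teardown. A self-contained sketch, using local stand-in types in place of the real EchosPlugin/PluginContext from @echos/core:

```typescript
// Stand-in types so this sketch runs on its own; in a real plugin,
// import EchosPlugin and PluginContext from '@echos/core' instead.
interface StubTool { name: string }
interface StubPlugin {
  name: string;
  description: string;
  version: string;
  setup(): StubTool[];
  teardown?(): void;
}

let timer: ReturnType<typeof setInterval> | undefined;

const pollingPlugin: StubPlugin = {
  name: 'polling-plugin',
  description: 'Acquires a timer in setup and releases it in teardown',
  version: '0.1.0',
  setup() {
    timer = setInterval(() => { /* e.g. refresh a cache */ }, 60_000);
    return [{ name: 'noop_tool' }];
  },
  teardown() {
    if (timer) clearInterval(timer); // release resources on shutdown
  },
};
```

Without the teardown, the interval would keep the process alive after shutdown is requested.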
Step 5: Register it

In src/index.ts, import and register:
import myPlugin from '@echos/plugin-my-plugin';

// After creating the PluginRegistry:
pluginRegistry.register(myPlugin);
Step 6: Wire up the workspace

Add the path mapping to root tsconfig.json:
{
  "compilerOptions": {
    "paths": {
      "@echos/plugin-my-plugin": ["./plugins/my-plugin/src/index.ts"]
    }
  }
}
Install dependencies:
pnpm install
Build and verify:
pnpm -r build
pnpm test

PluginContext API

Every plugin receives a PluginContext with:
| Property | Type | Description |
| --- | --- | --- |
| `sqlite` | `SqliteStorage` | Metadata DB (upsert, query, FTS5 search) |
| `markdown` | `MarkdownStorage` | Markdown file storage (save, read, delete) |
| `vectorDb` | `VectorStorage` | Vector embeddings (upsert, search) |
| `generateEmbedding` | `(text: string) => Promise<number[]>` | Generate embedding vectors |
| `logger` | `Logger` (Pino) | Structured logger |
| `config` | `Record<string, unknown>` | App config (API keys, etc.) |
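Because config is typed as Record&lt;string, unknown&gt;, plugins must narrow values before use (the tool example above casts anthropicApiKey with `as string`). A small helper makes this safer; note that requireConfigString is hypothetical, not part of the @echos/core API:

```typescript
// Hypothetical helper; not part of the @echos/core API.
function requireConfigString(config: Record<string, unknown>, key: string): string {
  const value = config[key];
  if (typeof value !== 'string' || value.length === 0) {
    throw new Error(`Missing required config value: ${key}`);
  }
  return value;
}

// Usage inside a tool (context assumed to be a PluginContext):
// const apiKey = requireConfigString(context.config, 'anthropicApiKey');
```

Failing fast with a named key beats a silent `undefined` turning into a confusing downstream API error.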

AI Categorization

Plugins can use the built-in categorization service from @echos/core:
import { categorizeContent, type ProcessingMode } from '@echos/core';

// Lightweight mode: category + tags only
const result = await categorizeContent(
  title,
  content,
  'lightweight',
  context.config.anthropicApiKey as string,
  context.logger,
  // Optional: receive progressive updates as the LLM streams its response
  (message) => onUpdate?.({ content: [{ type: 'text', text: message }], details: { phase: 'categorizing' } }),
);
// result: { category: string, tags: string[] }

// Full mode: includes gist, summary, key points
const fullResult = await categorizeContent(
  title,
  content,
  'full',
  context.config.anthropicApiKey as string,
  context.logger,
  (message) => onUpdate?.({ content: [{ type: 'text', text: message }], details: { phase: 'categorizing' } }),
);
// fullResult: { category, tags, gist, summary, keyPoints }
The categorization service:
  • Uses streamSimple + parseStreamingJson to stream the LLM response progressively
  • Fires onProgress as fields resolve: category → tags → gist (full mode)
  • Handles errors with safe defaults (fallback to 'uncategorized')
  • Respects content length limits (5000 chars for lightweight, 10000 for full)
  • Is safe to call without onProgress — callers that don’t need streaming omit the last argument
See Categorization for detailed documentation.
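The error-handling behaviour can be sketched as a small wrapper (illustrative; the real service's internals may differ): a failed categorization yields safe defaults instead of failing the save.

```typescript
interface CategorizeResult { category: string; tags: string[] }

// Illustrative sketch of the fallback rule described above: the note is
// still saved, just uncategorized, if the LLM call fails.
async function categorizeOrDefault(
  run: () => Promise<CategorizeResult>,
): Promise<CategorizeResult> {
  try {
    return await run();
  } catch {
    return { category: 'uncategorized', tags: [] };
  }
}
```

This mirrors the try/catch around categorizeContent in the tool example: categorization is a best-effort enrichment, never a hard dependency.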

Existing plugins

| Plugin | Package | Description |
| --- | --- | --- |
| YouTube | `@echos/plugin-youtube` | Transcript extraction via Python + Whisper fallback |
| Article | `@echos/plugin-article` | Web article extraction via Readability + DOMPurify |
| Image | `@echos/plugin-image` | Image storage with metadata extraction (Sharp) |

Image Plugin

The image plugin (@echos/plugin-image) provides the save_image tool for storing and organizing images in the knowledge base. Features:
  • Download images from URLs or accept base64 data
  • Extract metadata: dimensions, format, file size, EXIF
  • Store original files in knowledge/image/{category}/
  • Create searchable markdown notes with image references
  • Optional AI categorization for automatic tagging
Tool: save_image
{
  imageUrl?: string;        // URL to download image from
  imageData?: string;       // Base64-encoded image data
  title?: string;           // Image title
  caption?: string;         // Description or context
  tags?: string[];          // Array of tags
  category?: string;        // Category (default: "photos")
  autoCategorize?: boolean; // Use AI to categorize (default: false)
  processingMode?: 'lightweight' | 'full'; // AI processing mode
}
Supported formats:
  • JPEG, PNG, GIF, WebP, AVIF, TIFF, BMP
  • Maximum size: 20MB
Processor (processImage):
  • Validates image format and size
  • Extracts metadata using Sharp library
  • Generates content-based filename hash
  • Returns structured metadata and buffer
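The content-based filename can be sketched like this (assumption: the hash algorithm and truncation the real processImage uses may differ). Identical bytes always map to the same name, so re-saving an image does not duplicate the file:

```typescript
import { createHash } from 'node:crypto';

// Sketch of content-addressed naming; the real plugin's scheme may differ.
function hashedFilename(data: Buffer, ext: string): string {
  const hash = createHash('sha256').update(data).digest('hex').slice(0, 16);
  return `${hash}.${ext}`;
}
```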
Storage:
  • Original file: knowledge/image/{category}/{hash}.{ext}
  • Markdown note: knowledge/note/{category}/{date}-{slug}.md
  • Embedded reference: ![title](../../image/{category}/{hash}.{ext})
Telegram Integration:
  • Automatic photo handler via bot.on('message:photo')
  • Downloads from Telegram API
  • Passes URL to save_image tool
  • Supports captions for context
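Telegram delivers each photo in several resolutions, so the handler picks the largest before downloading. A sketch of that selection step (the PhotoSize shape mirrors the Telegram Bot API; the surrounding grammY wiring is omitted):

```typescript
// Mirrors the Telegram Bot API PhotoSize object (subset of fields).
interface PhotoSize { file_id: string; width: number; height: number }

// Telegram sends photo sizes in ascending order, but selecting by area
// is robust even if that ordering changes.
function largestPhoto(sizes: PhotoSize[]): PhotoSize {
  return sizes.reduce((best, s) =>
    s.width * s.height > best.width * best.height ? s : best,
  );
}
```

The chosen file_id is then resolved to a download URL via the Bot API's getFile and passed to the save_image tool.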
See Images for complete documentation on image handling.