
Troubleshooting Guide

Setup Wizard Issues

Wizard shows “No TTY detected” when using curl pipe

Problem: Running curl ... | bash pipes the script into bash's stdin, so no terminal is attached and the interactive wizard cannot read keyboard input. Solution:
# Option A: Run install.sh first, then the wizard manually
curl -sSL https://raw.githubusercontent.com/albinotonnina/echos/main/install.sh | bash
# Follow the printed instructions to run: cd ~/echos && pnpm wizard

# Option B: Download first, then run (preserves TTY)
curl -sSL https://raw.githubusercontent.com/albinotonnina/echos/main/install.sh -o /tmp/install-echos.sh
bash /tmp/install-echos.sh

API key validation fails but key is correct

Problem: The wizard rejects an API key during live validation even though the key works. Solutions:
  1. Check network connectivity — the wizard makes outbound HTTPS requests to api.anthropic.com, api.openai.com, api.telegram.org
  2. Use --skip-validation to bypass live checks: pnpm wizard --skip-validation
  3. Corporate proxies or firewalls may block API calls — configure HTTPS_PROXY env var before running wizard
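For the proxy case, a minimal sketch of the workaround — the proxy host and port below are placeholders for your environment:

```shell
# Placeholder proxy address — substitute your corporate proxy's host and port
export HTTPS_PROXY=http://proxy.example.com:8080
pnpm wizard
```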

“I have Claude Pro/Max subscription - can I use it for EchOS?”

Problem: Confusion about Anthropic subscription plans vs API access. Answer: ❌ No, subscription plans cannot be used for EchOS.
  • Claude Pro/Max subscriptions ($20–$200/month) are for using Claude through Anthropic’s web, desktop, and mobile apps only
  • They do NOT provide API access for programmatic integration
  • EchOS requires a separate Anthropic API account with pay-as-you-go billing
  • API costs are typically much lower than subscriptions for automated use cases (often ~$5/month for typical personal use)
What you need:
  1. Sign up for Anthropic API access at https://console.anthropic.com/
  2. Add credits or set up billing
  3. Generate an API key under Settings → API Keys
  4. Use that key in EchOS .env file
See: the Anthropic pricing page for a comparison of subscription vs API plans.
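To confirm a key works outside EchOS, a minimal sanity check against the Anthropic Messages API (this assumes curl, network access, and a funded API account; the model name is illustrative — use any current model):

```shell
# Minimal Messages API call: a 200 with JSON content means the key is valid;
# 401 means the key is wrong; a 400 mentioning credits means billing isn't set up
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-haiku-4-5-20251001","max_tokens":8,"messages":[{"role":"user","content":"ping"}]}'
```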

Wizard exits immediately in --non-interactive mode

Problem: Missing required env vars. Required variables for non-interactive mode:
  • ANTHROPIC_API_KEY
  • ALLOWED_USER_IDS
ANTHROPIC_API_KEY=sk-ant-... ALLOWED_USER_IDS=123456789 pnpm wizard --non-interactive

.env file has wrong permissions

Problem: Other users on the system can read your API keys. Fix:
chmod 0600 .env
ls -la .env  # should show: -rw------- 1 you ...
The wizard sets 0600 automatically, but if you created .env manually, set it yourself.
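To check and repair the mode portably (GNU stat uses -c, BSD/macOS stat uses -f), here is a demo run in a scratch directory — adapt the chmod/stat lines to your real .env:

```shell
# Demo in a temp dir: simulate a world-readable .env, then repair it
cd "$(mktemp -d)"
touch .env && chmod 644 .env       # wrong: group/world readable
chmod 600 .env                     # fix
mode=$(stat -c "%a" .env 2>/dev/null || stat -f "%Lp" .env)
echo "$mode"                       # 600
```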

“No .env file found. Run: pnpm wizard” on startup

Problem: pnpm start exits immediately with this message. Solution: Run the setup wizard:
pnpm wizard
Or for CI environments:
ANTHROPIC_API_KEY=... ALLOWED_USER_IDS=... pnpm wizard --non-interactive --skip-validation

Build and Installation Issues

Cannot find package ‘@echos/shared’ (or other @echos/* packages)

Problem: Workspace packages aren’t built or aren’t being resolved by tsx. Symptoms:
Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@echos/shared'
Solutions:
  1. Build all packages first:
    pnpm build
    
  2. If build fails, clean and rebuild:
    pnpm clean
    pnpm install
    pnpm build
    
  3. Verify path mappings: The root tsconfig.json includes these mappings for tsx:
    "paths": {
      "@echos/shared": ["./packages/shared/src/index.ts"],
      "@echos/core": ["./packages/core/src/index.ts"],
      // ... etc
    }
    

LanceDB Native Module Errors

Problem: LanceDB native bindings missing for your platform. Symptoms:
Error: Cannot find module '@lancedb/lancedb-darwin-x64'
Error: Cannot find module '@lancedb/lancedb-darwin-arm64'
Solutions:
  1. Intel Macs (darwin-x64):
    • The project is configured to use LanceDB 0.22.3 (last version with Intel Mac support)
    • This is set in packages/core/package.json
    • If you see this error, run: pnpm install --force
  2. Apple Silicon Macs (darwin-arm64):
    • Should work with LanceDB 0.26.2+
    • If issues persist, try: pnpm install --force
  3. Linux/Windows:
    • LanceDB should auto-install the correct native binding
    • Run: pnpm install --force if needed
  4. Check what’s installed:
    ls node_modules/.pnpm/@lancedb*/
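The native binding package name is derived from Node's platform/arch pair, so checking what Node reports tells you which binding pnpm should have installed:

```shell
# Prints e.g. darwin-arm64 or linux-x64 — this suffix must match an installed
# @lancedb/lancedb-* native package
node -p "process.platform + '-' + process.arch"
```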
    

Configuration Errors

Problem: Missing required environment variables. Symptoms:
Error: Invalid configuration:
  telegramBotToken: Required
  allowedUserIds: Required
  anthropicApiKey: Required
Solution:
  1. Copy the example file:
    cp .env.example .env
    
  2. Edit .env and fill in required values:
    TELEGRAM_BOT_TOKEN=your_token_from_botfather
    ALLOWED_USER_IDS=123456789,987654321
    ANTHROPIC_API_KEY=sk-ant-your-key-here
    
  3. Get your Telegram user ID:
    • Message @userinfobot on Telegram
    • Add your ID to ALLOWED_USER_IDS
  4. Verify the file loads (Node 20.6+):
    # The start script uses --env-file flag
    pnpm start
    

Runtime Issues

Telegram Bot Conflicts

Problem: Multiple bot instances trying to poll Telegram simultaneously. Symptoms:
GrammyError: Call to 'getUpdates' failed! (409: Conflict: 
terminated by other getUpdates request; make sure that only 
one bot instance is running)
Solution: Only one instance can poll Telegram updates at a time. The conflict can come from:
  • Local processes on your machine
  • Remote deployments (VPS, cloud instances, etc.)
  • Docker containers
  • Another developer’s machine using the same bot token

Step 1: Check local processes

ps aux | grep "tsx.*index.ts"
# Or more broadly
ps aux | grep echos
# If only the grep command appears, no local instances are running

Step 2: Check Docker containers

docker ps | grep echos
# If no output, no containers running

Step 3: Check bot webhook status

./scripts/check-telegram-bot.sh status
# Look for "url" field - should be empty "" for polling

Step 4: Clear webhook and pending updates

./scripts/check-telegram-bot.sh delete-webhook
# This drops pending updates and allows polling

Step 5: Check remote deployments

If you deployed to a remote server, check there:
# SSH to your server
ssh user@your-server

# Check for running processes
ps aux | grep echos

# Check Docker containers
docker ps

# Stop if found
docker compose down
# Or kill the process
pkill -f "tsx.*index.ts"

Step 6: Wait for timeout

If another instance was recently stopped, Telegram may still hold its long-polling connection open (30-second timeout). Wait 30-60 seconds, then try again.

Step 7: Stop all instances

If the conflict persists:
# Stop local processes
pkill -f "tsx.*index.ts"

# Stop Docker
docker compose down

# Clear webhook
./scripts/check-telegram-bot.sh delete-webhook

# Wait 60 seconds
sleep 60

# Restart
pnpm start
For Production: Use webhooks instead of long-polling to avoid conflicts:
# Set webhook URL (HTTPS required)
curl -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/setWebhook" \
  -d "url=https://your-domain.com/telegram/webhook" \
  -d "secret_token=your_random_secret_here"

# Verify webhook
curl "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getWebhookInfo"
Configure in .env:
TELEGRAM_WEBHOOK_URL=https://your-domain.com/telegram/webhook
TELEGRAM_WEBHOOK_SECRET=your_random_secret_here
Note: Implementing webhook support requires code changes in packages/telegram/src/index.ts.

Redis Connection Errors

Problem: Cannot connect to Redis. Symptoms:
Error: connect ECONNREFUSED 127.0.0.1:6379
Solutions:
  1. Use the Redis management script (recommended):
    # Check status
    pnpm redis:status
    
    # Start Redis (auto-detects platform)
    pnpm redis:start
    
    # Verify connection
    pnpm redis:health
    
  2. Manual platform-specific start (if script fails):
    # macOS
    brew services start redis
    
    # Linux (systemd)
    sudo systemctl start redis
    
    # Docker
    docker run -d -p 6379:6379 --name echos-redis redis:7-alpine
    
  3. Check Redis URL in .env:
    REDIS_URL=redis://localhost:6379
    
  4. Verify Redis is running:
    redis-cli ping
    # Should return: PONG
    
Note: Redis is only required when ENABLE_SCHEDULER=true. For basic knowledge management, you can disable the scheduler.
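If you only use knowledge management, a sketch of the relevant .env line (ENABLE_SCHEDULER=false is assumed to be the disabling value, mirroring the ENABLE_SCHEDULER=true flag above):

```shell
# .env — disable the scheduler so Redis is not required
ENABLE_SCHEDULER=false
```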

Database/Storage Errors

Problem: SQLite or LanceDB initialization failures. Symptoms:
Error: unable to open database file
Error: ENOENT: no such file or directory
Solutions:
  1. Create data directories:
    mkdir -p data/db data/knowledge data/sessions
    
  2. Check permissions:
    ls -la data/
    # Ensure directories are writable
    
  3. Check paths in .env:
    KNOWLEDGE_DIR=./data/knowledge
    DB_PATH=./data/db
    SESSION_DIR=./data/sessions
    
  4. Clean state (CAUTION: deletes all data):
    rm -rf data/
    mkdir -p data/db data/knowledge data/sessions
    pnpm start
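Steps 1 and 2 can be combined into one check. This demo creates the directory layout in a scratch directory and verifies each path is writable — run the mkdir and loop from the repo root for the real check:

```shell
cd "$(mktemp -d)"                  # demo dir; omit this line in the repo root
mkdir -p data/db data/knowledge data/sessions
for d in data/db data/knowledge data/sessions; do
  if [ -d "$d" ] && [ -w "$d" ]; then echo "ok: $d"; else echo "MISSING/READ-ONLY: $d"; fi
done
```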
    

Development Issues

TypeScript Errors

Problem: Type errors when editing code. Solutions:
  1. Check all packages:
    pnpm typecheck
    
  2. Rebuild after package changes:
    pnpm build
    
  3. Use watch mode during development:
    pnpm dev
    

Vitest Test Failures

Problem: Tests failing or hanging. Solutions:
  1. Run tests:
    pnpm test
    
  2. Run specific test file:
    pnpm vitest packages/shared/src/security/url-validator.test.ts
    
  3. Watch mode:
    pnpm test:watch
    

Storage Sync Issues

Problem: You added a .md file directly to the knowledge/ directory (or edited one in an external editor), but the agent can’t find it via search. How sync works:
  • On startup, EchOS reconciles all markdown files with SQLite and LanceDB automatically
  • While running, a file watcher picks up any add, change, or unlink events in real time (debounced 500 ms)
If a file isn’t being found:
  1. Check the file has a valid id in frontmatter — files without an id field are silently skipped:
    ---
    id: some-unique-id
    type: note
    title: My Note
    created: 2026-02-17T12:00:00.000Z
    updated: 2026-02-17T12:00:00.000Z
    tags: []
    links: []
    category: uncategorized
    ---
    
  2. If you added the file while the app was stopped — just restart. The startup reconciler will pick it up:
    pnpm start
    
  3. If you added the file while the app is running — the file watcher should index it within ~1 second. If it doesn’t appear after a few seconds, check the logs:
    pnpm start | pnpm exec pino-pretty
    # Look for "File watcher: upserted" or "Reconciler:" log lines
    
  4. Enable debug logging to see all reconciler/watcher events:
    LOG_LEVEL=debug pnpm start
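A minimal sketch of creating a well-formed note by hand — the filename, note id, and the knowledge/ location are illustrative, so match your KNOWLEDGE_DIR and naming scheme:

```shell
cd "$(mktemp -d)" && mkdir -p knowledge    # demo dir; use your real knowledge/ dir
cat > knowledge/my-note.md <<'EOF'
---
id: my-note-2026-02-17
type: note
title: My Note
created: 2026-02-17T12:00:00.000Z
updated: 2026-02-17T12:00:00.000Z
tags: []
links: []
category: uncategorized
---

Body text the indexer can embed and search.
EOF
grep -c '^id:' knowledge/my-note.md        # 1 — the required id field is present
```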
    

Search results show stale content after editing a note externally

Problem: You edited a markdown file in VS Code or Obsidian, but the agent still returns the old content. Why: a content hash prevents unnecessary re-indexing, so a note is re-indexed only when its body text changes (frontmatter-only edits are ignored). Solution: if the watcher picked up the change but the content still looks stale, wait out the 500 ms debounce window and retry. If the issue persists after restarting the app, check that the file has a valid id in its frontmatter — files without an id are skipped by both the reconciler and the watcher.

Memory Issues

Memory stored but not recalled after /reset

Problem: You told the agent to remember something, then after /reset it doesn’t know the fact. How memory works:
  • remember_about_me stores facts permanently in SQLite (survives /reset)
  • On every new session, the top 15 memories by confidence are injected into the system prompt automatically
  • Additional memories beyond the top 15 are searchable via recall_knowledge
If a memory isn’t being recalled:
  1. It may be beyond position 15 — ask explicitly: “recall what you know about X” to trigger recall_knowledge
  2. The search uses keyword matching — use related terms: “recall what you know about my birthday” or “recall birth year”
  3. Increase confidence when storing important facts: the agent can be told “remember this with high confidence”
To see all stored memories: ask “list everything you remember about me” — the agent will use recall_knowledge to retrieve all entries.

Content Status Issues

Article shows up in knowledge search even though I haven’t read it:
  • Articles saved via save_article start with status: saved (reading list), not status: read (knowledge)
  • If the agent is mixing them, remind it: “distinguish between saved articles and things I’ve actually read”
  • You can filter: “show only what I’ve actually read about X”
Reading list shows nothing:
  • Ask: “show my reading list” — the agent calls list_notes(status="saved")
  • Articles saved before this feature was introduced may have status: null — they won’t appear in filtered lists; use update_note or mark_content to set their status
Agent marks article as read when I didn’t ask:
  • This is intentional: when you begin actively discussing a saved article, the agent auto-marks it read
  • To prevent this, tell the agent you’re just asking about the topic in general, not discussing that specific article
save_conversation creates too much noise:
  • save_conversation is only called when you explicitly ask (“save this conversation” or “save what we discussed about X”)
  • It is never called automatically

CLI Issues (pnpm echos)

Empty response — agent runs but prints nothing

Most likely cause: The configured model is deprecated and the API returns an empty response. Check: Look for a deprecation warning in the output. If you see The model '...' is deprecated, update DEFAULT_MODEL in .env to a current model (e.g. claude-haiku-4-5-20251001). Debug: Run with logging enabled to see what happens:
LOG_LEVEL=info pnpm echos "hello"

Startup logs cluttering the output

By default the CLI suppresses all logs below warn. If you’re seeing INFO logs:
  1. Check whether LOG_LEVEL is exported in your shell: echo $LOG_LEVEL
  2. Unset it or set it to warn: unset LOG_LEVEL

Shell parse errors (zsh: parse error near '\n')

Special characters (?, >, *, !, &) in arguments are interpreted by the shell before echos sees them. Always quote arguments:
# Wrong — zsh interprets `?` and `>`
pnpm echos what notes do I have about TypeScript?

# Correct
pnpm echos "what notes do I have about TypeScript?"

History not persisting between sessions

The history file is ~/.echos_history. If it doesn’t persist:
  • Check write permissions: ls -la ~/.echos_history
  • The file is created on the first clean exit (exit / quit / Ctrl+D); a forced kill -9 skips the save.

YouTube Transcript Issues

“Unable to save this video” / YouTube transcript fails

Problem: The YouTube plugin relies on a Python subprocess (youtube-transcript-api) to extract transcripts. If the Python package is missing, every YouTube save fails and the agent reports that it was unable to save the video. Symptoms:
  • Bot responds with something like “I was unable to save this video”
  • Logs show: Python process exited with code 1 or ModuleNotFoundError: No module named 'youtube_transcript_api'
Fix:
  1. Install the Python package (required on local machine):
    pip3 install youtube-transcript-api
    
    On macOS with externally managed Python (PEP 668):
    pip3 install youtube-transcript-api --break-system-packages
    
  2. Verify installation:
    python3 -c "from youtube_transcript_api import YouTubeTranscriptApi; print('OK')"
    
  3. Docker: The Dockerfile already installs the package. No action needed for Docker deployments.
Proxy for cloud deployments: YouTube may block requests from cloud IPs. Configure Webshare proxy in .env:
WEBSHARE_PROXY_USERNAME=your_username
WEBSHARE_PROXY_PASSWORD=your_password
When set, the Python transcript fetcher automatically routes through p.webshare.io:80.
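A hedged way to confirm the proxy credentials themselves work, independent of EchOS (assumes curl and network access; prints the HTTP status YouTube returns through the proxy):

```shell
# Routes one request through the Webshare proxy; 200 (or a 3xx redirect) means
# the credentials and the proxy endpoint are usable
curl -s -o /dev/null -w "%{http_code}\n" \
  -x "http://${WEBSHARE_PROXY_USERNAME}:${WEBSHARE_PROXY_PASSWORD}@p.webshare.io:80" \
  "https://www.youtube.com/"
```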

Getting Help

If you’re still stuck:
  1. Check the logs: EchOS uses Pino for structured logging
    pnpm start | pnpm exec pino-pretty
    
  2. Enable debug logging: Set LOG_LEVEL=debug in .env
  3. Check system requirements:
    • Node.js 20+ (node --version)
    • pnpm 9+ (pnpm --version)
    • Redis running (redis-cli ping)
    • Enough disk space for embeddings/vectors
  4. Review security docs: See Security
  5. Architecture overview: See Architecture