Troubleshooting Guide
Setup Wizard Issues
Wizard shows “No TTY detected” when using curl pipe
Problem: Running curl ... | bash without a terminal redirects stdin, so the interactive wizard cannot receive keyboard input.
Solution:
# Option A: Run install.sh first, then the wizard manually
curl -sSL https://raw.githubusercontent.com/albinotonnina/echos/main/install.sh | bash
# Follow the printed instructions to run: cd ~/echos && pnpm wizard
# Option B: Download first, then run (preserves TTY)
curl -sSL https://raw.githubusercontent.com/albinotonnina/echos/main/install.sh -o /tmp/install-echos.sh
bash /tmp/install-echos.sh
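If you are unsure whether your invocation still has a terminal attached, the standard `[ -t 0 ]` test tells you whether stdin is a TTY; a minimal sketch:

```shell
# Sketch: check whether stdin is attached to a terminal before an interactive step
if [ -t 0 ]; then
  echo "stdin is a TTY - the interactive wizard can run"
else
  echo "stdin is not a TTY - download the script first, then run it directly"
fi
```

Piping a script through `bash` (Option A) makes stdin the pipe, which is why the download-then-run flow in Option B preserves the TTY.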
API key validation fails but key is correct
Problem: The wizard rejects an API key during live validation even though the key works.
Solutions:
- Check network connectivity — the wizard makes outbound HTTPS requests to api.anthropic.com, api.openai.com, api.telegram.org
- Use --skip-validation to bypass live checks: pnpm wizard --skip-validation
- Corporate proxies or firewalls may block API calls — configure the HTTPS_PROXY env var before running the wizard
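A minimal sketch of the proxy setup; the proxy host and port below are placeholders, substitute your own:

```shell
# Sketch: route the wizard's outbound HTTPS calls through a corporate proxy.
# proxy.example.com:8080 is a placeholder, not a real proxy.
export HTTPS_PROXY="http://proxy.example.com:8080"
echo "HTTPS_PROXY=$HTTPS_PROXY"
# then, in the same shell session:
# pnpm wizard
```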
“I have a Claude Pro/Max subscription - can I use it for EchOS?”
Problem: Confusion about Anthropic subscription plans vs API access.
Answer: ❌ No, subscription plans cannot be used for EchOS.
- Claude Pro/Max subscriptions ($20–$200/month) are for using Claude through Anthropic’s web, desktop, and mobile apps only
- They do NOT provide API access for programmatic integration
- EchOS requires a separate Anthropic API account with pay-as-you-go billing
- API costs are typically much lower than subscriptions for automated use cases (often ~$5/month for typical personal use)
What you need:
- Sign up for Anthropic API access at https://console.anthropic.com/
- Add credits or set up billing
- Generate an API key under Settings → API Keys
- Use that key in the EchOS .env file
See the Anthropic pricing page for a comparison of subscription vs API plans.
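Before running the wizard, a cheap local sanity check is to confirm the key prefix; keys elsewhere in this guide start with sk-ant-. A sketch with a placeholder value:

```shell
# Sketch: local sanity check of the key prefix. The value below is a
# placeholder, not a real key.
ANTHROPIC_API_KEY="sk-ant-example-key"
case "$ANTHROPIC_API_KEY" in
  sk-ant-*) echo "looks like an Anthropic API key" ;;
  *)        echo "unexpected key format - double-check what you pasted" ;;
esac
```

This only catches paste mistakes (for example pasting a session token from the web app); it does not replace the wizard's live validation.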
Wizard fails in non-interactive mode
Problem: Missing required environment variables when running with --non-interactive.
Required variables for non-interactive mode:
- ALLOWED_USER_IDS
- At least one LLM key: ANTHROPIC_API_KEY or LLM_API_KEY
# Anthropic
ANTHROPIC_API_KEY=sk-ant-... ALLOWED_USER_IDS=123456789 pnpm wizard --non-interactive
# Other provider (e.g. Groq)
LLM_API_KEY=gsk_... DEFAULT_MODEL=groq/llama-3.3-70b-versatile ALLOWED_USER_IDS=123456789 pnpm wizard --non-interactive
.env file has wrong permissions
Problem: Other users on the system can read your API keys.
Fix:
chmod 0600 .env
ls -la .env # should show: -rw------- 1 you ...
The wizard sets 0600 automatically, but if you created .env manually, set it yourself.
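If you script your setup, the check-then-fix can be automated; a sketch that tries both GNU (Linux) and BSD (macOS) stat flags, since they differ:

```shell
# Sketch: tighten .env permissions only when they are too open.
# Run from the directory containing .env.
perms=$(stat -c '%a' .env 2>/dev/null || stat -f '%Lp' .env 2>/dev/null)
if [ "$perms" != "600" ]; then
  chmod 0600 .env && echo "tightened .env to 0600"
else
  echo ".env permissions already 0600"
fi
```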
“No .env file found. Run: pnpm wizard” on startup
Problem: pnpm start exits immediately with this message.
Solution: Run the setup wizard:
pnpm wizard
Or for CI environments:
ANTHROPIC_API_KEY=... ALLOWED_USER_IDS=... pnpm wizard --non-interactive --skip-validation
# or with a non-Anthropic provider:
LLM_API_KEY=... DEFAULT_MODEL=groq/llama-3.3-70b-versatile ALLOWED_USER_IDS=... pnpm wizard --non-interactive --skip-validation
Build and Installation Issues
Cannot find package ‘@echos/shared’ (or other @echos/* packages)
Problem: Workspace packages aren’t built or aren’t being resolved by tsx.
Symptoms:
Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@echos/shared'
Solutions:
- Build all packages first:
pnpm build
- If the build fails, clean and rebuild:
pnpm clean
pnpm install
pnpm build
- Verify path mappings: the root tsconfig.json includes these mappings for tsx:
"paths": {
"@echos/shared": ["./packages/shared/src/index.ts"],
"@echos/core": ["./packages/core/src/index.ts"],
// ... etc
}
LanceDB Native Module Errors
Problem: LanceDB native bindings missing for your platform.
Symptoms:
Error: Cannot find module '@lancedb/lancedb-darwin-x64'
Error: Cannot find module '@lancedb/lancedb-darwin-arm64'
Solutions:
- Intel Macs (darwin-x64):
  - The project is pinned to LanceDB 0.22.3 (the last version with Intel Mac support)
  - This is set in packages/core/package.json
  - If you see this error, run: pnpm install --force
- Apple Silicon Macs (darwin-arm64):
  - Should work with LanceDB 0.26.2+
  - If issues persist, try: pnpm install --force
- Linux/Windows:
  - LanceDB should auto-install the correct native binding
  - Run pnpm install --force if needed
- Check what’s installed:
ls node_modules/.pnpm/@lancedb*/
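To work out which native package your platform should have, a sketch built on uname; note that this is an illustration of the darwin package names shown above, and real Linux builds also carry a libc suffix (such as -gnu) that it omits:

```shell
# Sketch: print the LanceDB native package name expected for this platform.
# Linux builds also carry a libc suffix (e.g. -gnu), omitted here.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # darwin or linux
arch=$(uname -m)                              # x86_64, arm64, or aarch64
case "$arch" in
  x86_64)  arch=x64 ;;
  aarch64) arch=arm64 ;;
esac
echo "@lancedb/lancedb-${os}-${arch}"
```

Compare the printed name against the `ls` output above to see whether the right binding was installed.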
Configuration Errors
Problem: Missing required environment variables.
Symptoms:
Error: Invalid configuration:
telegramBotToken: Required
allowedUserIds: Required
anthropicApiKey: Required
Solution:
- Copy the example file:
- Edit .env and fill in required values:
TELEGRAM_BOT_TOKEN=your_token_from_botfather
ALLOWED_USER_IDS=123456789,987654321
# Anthropic:
ANTHROPIC_API_KEY=sk-ant-your-key-here
# or another provider (e.g. Groq):
# LLM_API_KEY=gsk_...
# DEFAULT_MODEL=groq/llama-3.3-70b-versatile
- Get your Telegram user ID:
  - Message @userinfobot on Telegram
  - Add your ID to ALLOWED_USER_IDS
- Verify the file loads (Node 20.6+):
# The start script uses --env-file flag
pnpm start
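A pure-shell sanity check, using the variable names from the example above, that the required keys are present in .env:

```shell
# Sketch: verify required keys are present in .env (run from the project root)
for var in TELEGRAM_BOT_TOKEN ALLOWED_USER_IDS; do
  if grep -q "^${var}=" .env 2>/dev/null; then
    echo "ok: $var"
  else
    echo "missing: $var"
  fi
done
```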
Runtime Issues
Telegram Bot Conflicts
Problem: Multiple bot instances trying to poll Telegram simultaneously.
Symptoms:
GrammyError: Call to 'getUpdates' failed! (409: Conflict:
terminated by other getUpdates request; make sure that only
one bot instance is running)
Solution:
Only one instance can poll Telegram updates at a time. The conflict can come from:
- Local processes on your machine
- Remote deployments (VPS, cloud instances, etc.)
- Docker containers
- Another developer’s machine using the same bot token
Step 1: Check local processes
ps aux | grep "tsx.*index.ts"
# Or more broadly
ps aux | grep echos
# If only the grep command appears, no local instances are running
Step 2: Check Docker containers
docker ps | grep echos
# If no output, no containers running
Step 3: Check bot webhook status
./scripts/check-telegram-bot.sh status
# Look for "url" field - should be empty "" for polling
Step 4: Clear webhook and pending updates
./scripts/check-telegram-bot.sh delete-webhook
# This drops pending updates and allows polling
Step 5: Check remote deployments
If you deployed to a remote server, check there:
# SSH to your server
ssh user@your-server
# Check for running processes
ps aux | grep echos
# Check Docker containers
docker ps
# Stop if found
docker compose down
# Or kill the process
pkill -f "tsx.*index.ts"
Step 6: Wait for timeout
If another instance was recently stopped, Telegram may still have an active long-polling connection (30 second timeout). Wait 30-60 seconds then try again.
Step 7: Stop all instances
If the conflict persists:
# Stop local processes
pkill -f "tsx.*index.ts"
# Stop Docker
docker compose down
# Clear webhook
./scripts/check-telegram-bot.sh delete-webhook
# Wait 60 seconds
sleep 60
# Restart
pnpm start
For Production: Use webhooks instead of long-polling to avoid conflicts:
# Set webhook URL (HTTPS required)
curl -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/setWebhook" \
-d "url=https://your-domain.com/telegram/webhook" \
-d "secret_token=your_random_secret_here"
# Verify webhook
curl "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getWebhookInfo"
Configure in .env:
TELEGRAM_WEBHOOK_URL=https://your-domain.com/telegram/webhook
TELEGRAM_WEBHOOK_SECRET=your_random_secret_here
Note: Implementing webhook support requires code changes in packages/telegram/src/index.ts.
Redis Connection Errors
Problem: Cannot connect to Redis.
Symptoms:
Error: connect ECONNREFUSED 127.0.0.1:6379
Solutions:
- Use the Redis management script (recommended):
# Check status
pnpm redis:status
# Start Redis (auto-detects platform)
pnpm redis:start
# Verify connection
pnpm redis:health
- Manual platform-specific start (if the script fails):
# macOS
brew services start redis
# Linux (systemd)
sudo systemctl start redis
# Docker
docker run -d -p 6379:6379 --name echos-redis redis:7-alpine
- Check the Redis URL in .env:
REDIS_URL=redis://localhost:6379
- Verify Redis is running:
redis-cli ping
# Should return: PONG
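When debugging connectivity by hand, it can help to split REDIS_URL into host and port (for example to pass to redis-cli -h/-p or nc); a sketch using shell parameter expansion:

```shell
# Sketch: split REDIS_URL into host and port with shell parameter expansion
REDIS_URL="redis://localhost:6379"
hostport="${REDIS_URL#redis://}"   # strip the scheme -> localhost:6379
host="${hostport%%:*}"             # everything before the first colon
port="${hostport##*:}"             # everything after the last colon
echo "host=$host port=$port"
```

This simple split assumes the URL has no auth or database-number parts; a full redis:// URL parser would need more care.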
Note: Redis is required for EchOS. If Redis is not running at startup, EchOS will exit with an error.
Database/Storage Errors
Problem: SQLite or LanceDB initialization failures.
Symptoms:
Error: unable to open database file
Error: ENOENT: no such file or directory
Solutions:
- Create data directories:
mkdir -p data/db data/knowledge data/sessions
- Check permissions:
ls -la data/
# Ensure directories are writable
- Check the paths in .env:
KNOWLEDGE_DIR=./data/knowledge
DB_PATH=./data/db
SESSION_DIR=./data/sessions
- Clean state (CAUTION: deletes all data):
rm -rf data/
mkdir -p data/db data/knowledge data/sessions
pnpm start
Development Issues
TypeScript Errors
Problem: Type errors when editing code.
Solutions:
- Check all packages:
- Rebuild after package changes:
- Use watch mode during development:
Vitest Test Failures
Problem: Tests failing or hanging.
Solutions:
- Run tests:
- Run a specific test file:
pnpm vitest packages/shared/src/security/url-validator.test.ts
- Watch mode:
Storage Sync Issues
Manually added/edited markdown file not appearing in search
Problem: You added a .md file directly to the knowledge/ directory (or edited one in an external editor), but the agent can’t find it via search.
How sync works:
- On startup, EchOS reconciles all markdown files with SQLite and LanceDB automatically
- While running, a file watcher picks up add, change, and unlink events in real time (debounced 500 ms)
If a file isn’t being found:
- Check that the file has a valid id in frontmatter — files without an id field are silently skipped:
---
id: some-unique-id
type: note
title: My Note
created: 2026-02-17T12:00:00.000Z
updated: 2026-02-17T12:00:00.000Z
tags: []
links: []
category: uncategorized
---
- If you added the file while the app was stopped — just restart (pnpm start). The startup reconciler will pick it up.
- If you added the file while the app is running — the file watcher should index it within ~1 second. If it doesn’t appear after a few seconds, check the logs:
pnpm start | pnpm exec pino-pretty
# Look for "File watcher: upserted" or "Reconciler:" log lines
- Enable debug logging to see all reconciler/watcher events:
LOG_LEVEL=debug pnpm start
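The id check in step 1 can be automated with `grep -L`, which lists files containing no matching line; a sketch that creates two sample files purely for illustration (the data/knowledge path is the default from this guide):

```shell
# Sketch: list knowledge files whose frontmatter lacks an "id:" line
# (such files are skipped by the reconciler and the watcher).
# The two sample files are created here for illustration only.
mkdir -p data/knowledge
printf -- '---\nid: note-1\ntitle: Has an id\n---\nbody\n' > data/knowledge/with-id.md
printf -- '---\ntitle: Missing id\n---\nbody\n' > data/knowledge/without-id.md

grep -L '^id:' data/knowledge/*.md   # prints only the file missing an id
```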
Search results show stale content after editing a note externally
Problem: You edited a markdown file in VS Code or Obsidian, but the agent still returns the old content.
A content hash prevents unnecessary re-indexing: a note is re-indexed only when its body text changes, not when only the frontmatter changes. If the watcher picked up the change but the content looks stale, wait a moment for the debounce window (500 ms), then retry.
If the issue persists after restarting the app, check that the file has a valid id in its frontmatter — files without an ID are skipped by both the reconciler and the watcher.
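The content-hash idea can be illustrated in shell: hash only what follows the closing --- of the frontmatter, so frontmatter-only edits leave the hash unchanged. This is a sketch of the concept, not the app's actual hashing code:

```shell
# Sketch: hash only the note body (text after the second ---), so editing
# frontmatter fields like tags or title does not change the hash.
printf -- '---\nid: n1\ntitle: Demo\n---\nbody text\n' > /tmp/demo-note.md
awk 'seen == 2 { print } /^---$/ { seen++ }' /tmp/demo-note.md | sha256sum
```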
Memory Issues
Memory stored but not recalled after /reset
Problem: You told the agent to remember something, then after /reset it doesn’t know the fact.
How memory works:
- remember_about_me stores facts permanently in SQLite (survives /reset)
- On every new session, the top 15 memories by confidence are injected into the system prompt automatically
- Additional memories beyond the top 15 are searchable via recall_knowledge
If a memory isn’t being recalled:
- It may be beyond position 15 — ask explicitly: “recall what you know about X” to trigger recall_knowledge
- The search uses keyword matching — use related terms: “recall what you know about my birthday” or “recall birth year”
- Increase confidence when storing important facts: the agent can be told “remember this with high confidence”
To see all stored memories: ask “list everything you remember about me” — the agent will use recall_knowledge to retrieve all entries.
Content Status Issues
Article shows up in knowledge search even though I haven’t read it:
- Articles saved via save_article start with status: saved (reading list), not status: read (knowledge)
- If the agent is mixing them, remind it: “distinguish between saved articles and things I’ve actually read”
- You can filter: “show only what I’ve actually read about X”
Reading list shows nothing:
- Ask: “show my reading list” — the agent calls list_notes(status="saved")
- Articles saved before this feature was introduced may have status: null — they won’t appear in filtered lists; use update_note or mark_content to set their status
Agent marks article as read when I didn’t ask:
- This is intentional: when you begin actively discussing a saved article, the agent auto-marks it read
- To prevent this, tell the agent you’re just asking about the topic in general, not discussing that specific article
save_conversation creates too much noise:
- save_conversation is only called when you explicitly ask (“save this conversation” or “save what we discussed about X”)
- It is never called automatically
CLI Issues (pnpm echos)
Empty response — agent runs but prints nothing
Most likely cause: The configured model is deprecated and the API returns an empty response.
Check: Look for a deprecation warning in the output. If you see The model '...' is deprecated, update DEFAULT_MODEL in .env to a current model (e.g. claude-haiku-4-5-20251001).
Debug: Run with logging enabled to see what happens:
LOG_LEVEL=info pnpm echos "hello"
Startup logs cluttering the output
By default the CLI suppresses all logs below warn. If you’re seeing INFO logs:
- Check whether LOG_LEVEL is exported in your shell: echo $LOG_LEVEL
- Unset it or set it to warn: unset LOG_LEVEL
Shell parse errors (zsh: parse error near '\n')
Special characters (?, >, *, !, &) in arguments are interpreted by the shell before echos sees them. Always quote arguments:
# Wrong — zsh interprets `?` and `>`
pnpm echos what notes do I have about TypeScript?
# Correct
pnpm echos "what notes do I have about TypeScript?"
History not persisting between sessions
The history file is ~/.echos_history. If it doesn’t persist:
- Check write permissions:
ls -la ~/.echos_history
- The file is created on the first clean exit (exit / quit / Ctrl+D). Kills via kill -9 skip the save.
YouTube Transcript Issues
“Unable to save this video” / YouTube transcript fails
Problem: The YouTube plugin uses youtube-transcript-api-js (pure JS) to fetch transcripts. Failures are usually caused by the server IP being blocked by YouTube.
Symptoms:
- Bot responds with “I was unable to save this video”
- Logs show: YouTube transcript unavailable [IpBlocked] or YouTube transcript unavailable [RequestBlocked]
Fixes:
- Local machine: no setup required — home IPs are generally not blocked by YouTube.
- Docker / VPS: the Docker image includes all required JS dependencies. No extra steps needed.
Proxy for cloud deployments:
YouTube blocks most cloud/datacenter IPs. Configure a Webshare rotating proxy in .env:
WEBSHARE_PROXY_USERNAME=your_username
WEBSHARE_PROXY_PASSWORD=your_password
Use your base Webshare username — without the -rotate suffix. The app appends -rotate automatically to enable IP rotation. If you copy the username directly from the Webshare dashboard and it already ends in -rotate, remove that suffix before pasting it here.
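The suffix cleanup can be done in shell before pasting the value into .env; a sketch with a placeholder username (`${var%-rotate}` strips the suffix only if present):

```shell
# Sketch: normalize a Webshare username copied with the -rotate suffix.
# "myuser-rotate" is a placeholder value.
WEBSHARE_PROXY_USERNAME="myuser-rotate"
WEBSHARE_PROXY_USERNAME="${WEBSHARE_PROXY_USERNAME%-rotate}"
echo "$WEBSHARE_PROXY_USERNAME"   # base username; the app re-appends -rotate itself
```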
When set, transcript requests and audio downloads route through p.webshare.io:80.
Twitter/X Issues
Tweet fails to save
Problem: The Twitter plugin uses the free FxTwitter API (api.fxtwitter.com). If the tweet is unavailable or the API is temporarily down, the save will fail.
Symptoms:
- Agent responds with “I wasn’t able to save that tweet” or similar
- Logs show: save_tweet failed with an error message
Fixes:
- Check the URL format — the plugin accepts these formats:
https://twitter.com/<user>/status/<id>
https://x.com/<user>/status/<id>
https://mobile.twitter.com/<user>/status/<id>
https://fxtwitter.com/<user>/status/<id> / https://vxtwitter.com/<user>/status/<id>
- Check that the tweet exists and is public — private/protected accounts and deleted tweets cannot be fetched.
- Enable debug logging to see the full error from the FxTwitter API:
LOG_LEVEL=debug pnpm echos "save this tweet: https://x.com/..."
- FxTwitter rate limiting — the FxTwitter API is a free public service and may occasionally rate-limit requests. Wait a moment and try again.
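To rule out a URL-format problem before digging further, the accepted patterns listed above can be checked with a shell case statement; a sketch, not the plugin's actual matcher:

```shell
# Sketch: pre-check a URL against the accepted tweet URL formats
url="https://x.com/someuser/status/1234567890"   # example URL
case "$url" in
  https://twitter.com/*/status/* | https://x.com/*/status/* | \
  https://mobile.twitter.com/*/status/* | \
  https://fxtwitter.com/*/status/* | https://vxtwitter.com/*/status/*)
    echo "accepted" ;;
  *)
    echo "rejected" ;;
esac
```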
Agent creates a note instead of calling save_tweet
Problem: The agent creates a generic note when given a Twitter URL instead of calling save_tweet.
Fix: The system prompt includes explicit URL routing. If this happens, it may indicate the system prompt wasn’t applied to the session. Try resetting the session:
- Telegram: send /reset
- CLI: exit and restart (exit or Ctrl+D)
The routing rules in the system prompt direct twitter.com and x.com URLs to save_tweet, not create_note.
Getting Help
If you’re still stuck:
- Check the logs: EchOS uses Pino for structured logging
pnpm start | pnpm exec pino-pretty
- Enable debug logging: set LOG_LEVEL=debug in .env
- Check system requirements:
  - Node.js 20+ (node --version)
  - pnpm 9+ (pnpm --version)
  - Redis running (redis-cli ping)
  - Enough disk space for embeddings/vectors
- Review security docs: see Security
- Architecture overview: see Architecture