# Setup Fixes & Configuration Changes
This document tracks configuration changes and fixes made to ensure the project runs correctly.

## April 2026 — MCP Server Core (Phase 11.01)
New env vars: `ENABLE_MCP`, `MCP_PORT`, `MCP_API_KEY`

A new MCP (Model Context Protocol) server is available. It is disabled by default. When `ENABLE_MCP=true`, EchOS starts an HTTP server on `MCP_PORT` (default 3939) that exposes seven knowledge tools via the Model Context Protocol. The server binds only to 127.0.0.1.

If `MCP_API_KEY` is unset, all requests are accepted without authentication (suitable for localhost-only use). If it is set, every request must include `Authorization: Bearer <MCP_API_KEY>`.
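The auth rule can be sketched as a small helper (a hypothetical sketch; `mcpHeaders` is not a name from the codebase):

```typescript
// Hypothetical helper: build request headers matching the MCP server's auth rule.
// With no API key configured, the server accepts unauthenticated requests, so no
// Authorization header is needed; with a key, it must be sent as a Bearer token.
function mcpHeaders(apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = { 'Content-Type': 'application/json' };
  if (apiKey) headers['Authorization'] = `Bearer ${apiKey}`;
  return headers;
}
```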
## March 2026 — SSHFS write access via Docker entrypoint ownership fix
**Problem**

SSHFS mounts of the knowledge directory were read-only. Docker was writing volume files as the `node` user (UID 1000), which differed from the server SSH user's UID on some VPS providers (e.g. Oracle Cloud, where the default `ubuntu` user is UID 1001).
**Fix**

A Docker entrypoint script (`docker/entrypoint.sh`) now runs as root on startup, detects any ownership mismatch in the data directories, corrects it, then drops to the `node` user via `su-exec` before starting the app. This self-heals on every container start with no manual steps.
If upgrading from an older version, re-own the data directories once (e.g. `chown -R 1000:1000 ./data`, matching the `node` user's UID) and restart the container.
## February 25, 2026 — Configurable Prompt Cache Retention

New env var: `CACHE_RETENTION`
pi-ai already supported a `PI_CACHE_RETENTION` env var for controlling the Anthropic prompt cache TTL (checked inside `resolveCacheRetention` in the provider), defaulting to `'short'` (5 min) unless `PI_CACHE_RETENTION=long` was set. EchOS now exposes this as a first-class Zod-validated config field, with a default of `'long'` and automatic enforcement of `'none'` for custom endpoints.
Differences from the `PI_CACHE_RETENTION` approach:

- EchOS now defaults to `'long'` instead of pi-ai's `'short'`
- When `LLM_BASE_URL` is set (custom OpenAI-compatible endpoint), `cacheRetention` is forced to `'none'` regardless of this env var — those endpoints do not support Anthropic-style prompt caching
- The effective value is Zod-validated and logged at startup
- `packages/shared/src/config/index.ts` — added `cacheRetention` Zod enum field; `CACHE_RETENTION` env var mapping
- `packages/core/src/agent/index.ts` — `cacheRetention?` in `AgentDeps`; `effectiveCacheRetention` computed (forced `'none'` for non-Anthropic); passed to `streamSimple`; logged at startup
- `src/index.ts` — passes `config.cacheRetention` into `AgentDeps`
- `.env.example` — `CACHE_RETENTION=long` added under LLM settings
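The forcing rule can be illustrated with a small sketch (hypothetical; the actual logic lives in `packages/core/src/agent/index.ts` and the names here are assumed):

```typescript
type CacheRetention = 'none' | 'short' | 'long';

// Sketch of the effective-retention rule described above.
function effectiveCacheRetention(
  configured: CacheRetention,
  llmBaseUrl?: string,
): CacheRetention {
  // Custom OpenAI-compatible endpoints do not support Anthropic-style
  // prompt caching, so retention is forced off regardless of CACHE_RETENTION.
  if (llmBaseUrl) return 'none';
  return configured;
}
```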
## February 19, 2026 — Standalone CLI & Model Deprecation Fix

### `pnpm echos` — standalone three-mode CLI
`packages/cli/src/index.ts` is a standalone CLI binary — no daemon required. Three auto-detected modes:
| Invocation | Mode | Behaviour |
|---|---|---|
| `pnpm echos "query"` | One-shot | Boots agent, answers, exits |
| `echo "msg" \| pnpm echos` | Pipe | Reads stdin, answers in plain text, exits |
| `pnpm echos` (TTY) | Interactive REPL | Persistent session with history |
- History persisted to `~/.echos_history` (max 500 entries, restored on next run)
- Ctrl+C cancels in-flight response, re-prompts (does not exit)
- Tool calls shown in dim colour on TTY; plain text in pipe mode
- `exit` / `quit` / Ctrl+D exits cleanly
The CLI runs at the `warn` log level, so startup logs are suppressed. Override with `LOG_LEVEL=info pnpm echos`.
No daemon required. Both `pnpm start` (daemon) and `pnpm echos` (CLI) read from the same `./data/` directory. SQLite WAL mode makes concurrent access safe.
Files changed:
- `packages/cli/src/index.ts` — three-mode standalone CLI
- `packages/cli/package.json` — `"bin": { "echos": "./dist/index.js" }`
- `package.json` — `"echos": "tsx --env-file=.env packages/cli/src/index.ts"` script
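The three-mode auto-detection can be sketched as a pure function (hypothetical names; the real CLI would inspect `process.argv` and `process.stdin.isTTY`):

```typescript
type CliMode = 'one-shot' | 'pipe' | 'repl';

// Sketch: decide the CLI mode from the arguments and whether stdin is a TTY.
function detectMode(args: string[], stdinIsTTY: boolean): CliMode {
  if (args.length > 0) return 'one-shot'; // query passed on the command line
  if (!stdinIsTTY) return 'pipe';         // stdin is piped: read it, answer, exit
  return 'repl';                          // interactive terminal: persistent session
}
```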
### Default model updated — `claude-3-5-haiku-20241022` → `claude-haiku-4-5-20251001`
`claude-3-5-haiku-20241022` reached end-of-life on February 19, 2026; the Anthropic API now returns empty responses for this model.
Files changed:
- `packages/shared/src/config/index.ts` — `defaultModel` default updated
- `packages/core/src/agent/index.ts` — in-code fallback updated; type cast cleaned up to `Parameters<typeof getModel>[1]`
If you have `DEFAULT_MODEL=claude-3-5-haiku-20241022` in `.env`, update it to `claude-haiku-4-5-20251001`.
## February 19, 2026 — Web API Security & Experimental Interface Defaults

### Web marked experimental, disabled by default
The Web UI is now clearly marked as experimental in the setup wizard, with a warning before the interface selection step. It defaults to off (it was already off in the config schema, but the wizard was pre-selecting the Web UI when no existing `.env` was found).
### New env var: `WEB_API_KEY`
All web API routes (except `GET /health`) now require `Authorization: Bearer <WEB_API_KEY>`; set the key in `.env`. If `WEB_API_KEY` is not set, the server starts but logs a warning and all routes are unauthenticated.
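The auth hook's check amounts to the following (a hypothetical sketch; `isAuthorized` is not a real export):

```typescript
// Sketch of the bearer-token check performed on every route except GET /health.
function isAuthorized(
  authHeader: string | undefined,
  apiKey: string | undefined,
): boolean {
  if (!apiKey) return true; // no key configured: unauthenticated mode (a warning is logged)
  return authHeader === `Bearer ${apiKey}`;
}
```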
### `userId` now validated against `ALLOWED_USER_IDS` in web routes
All `/api/chat/*` routes now verify that the `userId` in the request body is in `ALLOWED_USER_IDS`. Unknown user IDs return `403 Forbidden`. Previously the web API accepted any numeric `userId`.
### Web server binds to 127.0.0.1 (was 0.0.0.0)
The Fastify server no longer listens on all interfaces. It is only reachable from localhost.
### CORS restricted to localhost origins
`origin: true` (allow all) was replaced with a function that only allows `http(s)://localhost:*` and `http(s)://127.0.0.1:*`.
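A localhost-only origin check of this shape would look like (a sketch under assumed names, not the actual implementation):

```typescript
// Sketch: allow only localhost/127.0.0.1 origins, any port, http or https.
const LOCALHOST_ORIGIN = /^https?:\/\/(localhost|127\.0\.0\.1)(:\d+)?$/;

function corsOriginAllowed(origin: string | undefined): boolean {
  if (!origin) return true; // non-browser or same-origin requests send no Origin header
  return LOCALHOST_ORIGIN.test(origin);
}
```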
### Non-interactive wizard fix: `enableWeb` default
In `--non-interactive` mode, `ENABLE_WEB` was defaulting to `true` unless explicitly set to `'false'`. It now defaults to `false` unless explicitly set to `'true'`, matching the config schema default.
Files changed:
- `packages/shared/src/config/index.ts` — `webApiKey` field added
- `packages/web/src/index.ts` — auth hook, localhost bind, restricted CORS
- `packages/web/src/api/chat.ts` — `allowedUserIds` param, `403` on unknown `userId`
- `scripts/setup.ts` — experimental warning, fixed defaults, generate/store `WEB_API_KEY`
## February 19, 2026 — Model Presets & Cross-Provider Handoffs

New env vars: `MODEL_BALANCED`, `MODEL_DEEP`
Configure named model presets for on-the-fly switching during a session. A preset value may be a bare model id, or `provider/model-id` to pin an explicit provider:
| Preset | Default model |
|---|---|
| `fast` | `claude-3-5-haiku-20241022` |
| `balanced` | `claude-sonnet-4-5` |
| `deep` | `claude-opus-4-5` |
Switch presets mid-session from either interface (on handoffs, prior reasoning is carried as `<thinking>` tagged text for cross-provider compatibility):

- Telegram: `/model balanced`
- Web API: `POST /api/chat/model { "preset": "balanced", "userId": 123 }`
Files changed:

- `packages/shared/src/config/index.ts` — `modelBalanced`, `modelDeep` fields
- `packages/core/src/agent/model-resolver.ts` — `resolveModel(spec)`, `MODEL_PRESETS`, `ModelPreset`
- `packages/core/src/agent/index.ts` — `modelPresets?` in `AgentDeps`
- `packages/core/src/index.ts` — exports resolver + preset types
- `src/index.ts` — passes presets into `AgentDeps`
- `packages/telegram/src/index.ts` — `/model` command
- `packages/web/src/api/chat.ts` — `POST /api/chat/model` endpoint
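A plausible shape for the resolver (a sketch only; the preset table above supplies the defaults, and the fallback provider here is an assumption):

```typescript
// Sketch of resolveModel(spec): accept a preset name or an explicit
// "provider/model-id" spec and return a provider + model id pair.
const MODEL_PRESETS: Record<string, string> = {
  fast: 'claude-3-5-haiku-20241022',
  balanced: 'claude-sonnet-4-5',
  deep: 'claude-opus-4-5',
};

interface ResolvedModel {
  provider: string;
  modelId: string;
}

function resolveModel(spec: string): ResolvedModel {
  const target = MODEL_PRESETS[spec] ?? spec; // preset name, else literal spec
  const slash = target.indexOf('/');
  if (slash > 0) {
    return { provider: target.slice(0, slash), modelId: target.slice(slash + 1) };
  }
  return { provider: 'anthropic', modelId: target }; // assumed default provider
}
```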
## February 19, 2026 — Configurable Thinking Level

New env var: `THINKING_LEVEL`
Controls the LLM reasoning depth. Defaults to `off`. Valid values: `off | minimal | low | medium | high | xhigh`.
Note: `xhigh` is only supported by OpenAI GPT-5.2/5.3 and Anthropic Opus 4.6 (where it maps to adaptive effort "max"). Setting it on an unsupported model will produce an error. Use `medium` or `high` as safe upgrades for Claude Haiku/Sonnet.

Files changed:

- `packages/shared/src/config/index.ts` — added `thinkingLevel` Zod enum field
- `packages/core/src/agent/index.ts` — `thinkingLevel?` in `AgentDeps`; passed to `Agent` constructor; logged at startup
- `src/index.ts` — passes `config.thinkingLevel` into `AgentDeps`
## February 19, 2026 — LLM Payload Debug Logging

New env var: `LOG_LLM_PAYLOADS`
Set `LOG_LLM_PAYLOADS=true` to log the raw request payload sent to the LLM provider before each API call. Logged at Pino `debug` level under the key `payload`.
Files changed:

- `packages/shared/src/config/index.ts` — added `logLlmPayloads` field
- `packages/core/src/agent/index.ts` — wraps `agent.streamFn` with `onPayload` hook when enabled
- `src/index.ts` — passes `config.logLlmPayloads` into `AgentDeps`
Secret redaction (e.g. `ANTHROPIC_API_KEY`, `Authorization` headers) applies before any log output, so keys are not exposed even with this flag enabled. Do not enable in production unless actively debugging.
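The wrapping described above can be sketched like this (hypothetical; the real hook lives in `packages/core/src/agent/index.ts` and `withPayloadLogging` is an assumed name):

```typescript
// Sketch: wrap a stream function so the outgoing payload is handed to a
// logger before the provider call is made.
type StreamFn = (payload: unknown) => AsyncIterable<string>;

function withPayloadLogging(
  streamFn: StreamFn,
  logDebug: (obj: object) => void,
): StreamFn {
  return (payload) => {
    logDebug({ payload }); // logged under the key `payload` (redaction runs first)
    return streamFn(payload);
  };
}
```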
## February 18, 2026 — Distribution & First-Run Setup Wizard

### Setup Wizard (`pnpm wizard` / `pnpm wizard:cli`)
A new interactive setup wizard (`scripts/setup.ts`) replaces the manual `.env` editing workflow.
Usage: run `pnpm wizard` (or `pnpm wizard:cli`). The wizard:

- Checks Node 20+, pnpm 10+, disk space
- Detects an existing `.env` — offers update / replace / skip
- Collects and validates the Anthropic key (required), OpenAI key (optional), and Telegram token
- Configures interfaces (Telegram, Web UI) and ports
- Configures the Redis scheduler with cron schedules
- Shows a masked summary before writing
- Writes `.env` (mode 0600), backs up the old `.env` as `.env.backup.{timestamp}`
- Creates data directories
- Offers to run `pnpm build` if no dist is found
Security:

- All keys are entered via `password()` (masked `*`) — never visible in the terminal
- Keys are NOT accepted as CLI arguments (they would appear in `ps aux`)
- `.env` is written with `chmod 0600` immediately
- The env file is parsed with a simple line-by-line reader — no `eval()`, no shell interpolation
- API validation uses `fetch()` with `AbortSignal.timeout(10000)` — keys are never logged
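A line-by-line parser of the kind described is small; this is a sketch, not the wizard's actual code:

```typescript
// Sketch: parse KEY=value lines without eval() or shell interpolation.
function parseEnv(text: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const raw of text.split('\n')) {
    const line = raw.trim();
    if (!line || line.startsWith('#')) continue; // skip blanks and comments
    const eq = line.indexOf('=');
    if (eq <= 0) continue;                       // ignore malformed lines
    const key = line.slice(0, eq).trim();
    let value = line.slice(eq + 1).trim();
    // strip one pair of surrounding quotes, if present
    if (
      value.length >= 2 &&
      ((value.startsWith('"') && value.endsWith('"')) ||
        (value.startsWith("'") && value.endsWith("'")))
    ) {
      value = value.slice(1, -1);
    }
    out[key] = value;
  }
  return out;
}
```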
### Config schema fix — `telegramBotToken` now optional
`packages/shared/src/config/index.ts`: `telegramBotToken` changed from `z.string().min(1)` to `z.string().optional()`.

Why: Web-only deployments were blocked by a required `TELEGRAM_BOT_TOKEN` even when `ENABLE_TELEGRAM=false`. The token is still validated at runtime in `src/index.ts` before the Telegram adapter is created.
### First-run detection in `src/index.ts`

`src/index.ts` now exits with a helpful message if `.env` is missing.
### Docker improvements

- `depends_on.redis.required: false` — EchOS starts without Redis when the scheduler is disabled
- Healthcheck added to the `echos` service (HTTP `GET /health`)
- `nginx` and `certbot` services added under `--profile nginx`
- `docker/nginx.conf.template` created with SSE-compatible proxy config and Let's Encrypt instructions
### `install.sh` (VPS one-liner)
## February 15, 2026 — Initial Setup Fixes

### Issues Fixed
1. **Workspace Package Resolution**
   - Problem: tsx couldn't resolve `@echos/*` workspace packages when running `src/index.ts`
   - Solution: Added TypeScript path mappings to root `tsconfig.json`
   - Files changed: `tsconfig.json`
   - Details: Added a `paths` configuration mapping all `@echos/*` packages to their source locations

2. **LanceDB Native Module Compatibility**
   - Problem: LanceDB 0.26.2 dropped support for Intel Macs (darwin-x64)
   - Solution: Downgraded to LanceDB 0.22.3
   - Files changed: `packages/core/package.json`
   - Details: Changed `"@lancedb/lancedb": "^0.26.2"` to `"^0.22.3"`

3. **Environment File Loading**
   - Problem: Environment variables weren't being loaded from the `.env` file
   - Solution: Added the `--env-file` flag to the start script (Node 20.6+ feature)
   - Files changed: `package.json`
   - Details: Changed `"start": "tsx src/index.ts"` to `"start": "tsx --env-file=.env src/index.ts"`
### Configuration Changes
**`tsconfig.json` (root)** — added path mappings for workspace packages.

**`packages/core/package.json`** — changed the LanceDB version to `^0.22.3`.

**`package.json` (root)** — updated the start script. Node 20.6+ supports native `.env` file loading via the `--env-file` flag, eliminating the need for dotenv packages.
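The path mappings would take roughly this shape (a sketch; the exact package list and source layout are assumptions):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@echos/*": ["packages/*/src/index.ts"]
    }
  }
}
```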
### Build Process

The correct startup sequence is:

1. Install dependencies: `pnpm install`
2. Build all packages: `pnpm build` (required on first run and after any package changes)
3. Configure environment: edit `.env` with API keys
4. Start application: `pnpm start`
### Known Issues

**Telegram Bot Conflicts**

Symptom: `GrammyError: Call to 'getUpdates' failed! (409: Conflict)`
Cause: Another instance of the bot is already running. Telegram allows only one instance to poll for updates.
Fix: stop the other running instance before starting a new one (e.g. find it with `ps aux | grep tsx` and kill the process).
### Documentation Updates

- `README.md`: Added first-time setup notes and process management instructions
- `docs/DEPLOYMENT.md`: Added troubleshooting section and process management guide
- `docs/TROUBLESHOOTING.md`: New comprehensive troubleshooting guide
- `docs/SETUP_FIXES.md`: This file
### Environment Variables

Required variables (must be set in `.env`):

- `TELEGRAM_BOT_TOKEN` — from @BotFather
- `ALLOWED_USER_IDS` — comma-separated Telegram user IDs
- At least one LLM key: `ANTHROPIC_API_KEY` (Anthropic) or `LLM_API_KEY` (other providers)

Optional:

- `LLM_BASE_URL` — custom OpenAI-compatible endpoint; requires `LLM_API_KEY`
- `OPENAI_API_KEY` — for embeddings and Whisper
- `WHISPER_LANGUAGE` — ISO-639-1 code (e.g. `en`) to pin Whisper language detection
See `.env.example` for the full list with defaults.
### Verification Steps

To verify the setup is working:

1. Check Node version: `node --version` (should be 20+)
2. Check pnpm version: `pnpm --version` (should be 9+)
3. Check packages built: `ls packages/*/dist` (should show compiled JS files)
4. Check Redis running: `redis-cli ping` (should return PONG)
5. Check env file: `head .env` (should show configured values; avoid exposing secrets!)
6. Start application: `pnpm start` (should start without errors)
### Available Commands

- `pnpm start` — start daemon (Telegram + Web if enabled)
- `pnpm start:web-only` — start only the Web API (port 3000)
- `pnpm echos` — standalone CLI (interactive REPL, no daemon needed)
- `pnpm echos "query"` — one-shot CLI query
- `pnpm dev` — watch mode for development (rebuilds on changes)
- `pnpm test` — run all tests
- `pnpm build` — build all workspace packages
### Platform-Specific Notes

**macOS (Intel)**

- Uses LanceDB 0.22.3 with darwin-x64 native bindings
- Run `pnpm install --force` if native module errors occur
**macOS (Apple Silicon)**
- Could use newer LanceDB versions if needed
- Current version (0.22.3) works on both architectures
**Linux**
- Should work with LanceDB 0.22.3 or newer
- Native bindings auto-detected by platform
**Windows**
- Not extensively tested but should work
- May need WSL for better compatibility
### Future Improvements
- Consider using PM2 or similar for process management
- Add webhook support for Telegram (production recommended)
- Create systemd service file for Linux deployments
- Add health check endpoint for monitoring
- Document upgrade path for LanceDB when arm64 stability improves