# Configuration

`husk.toml` reference, environment variables, TTL, dedup, and backend options.

HUSK is configured via `husk.toml`, environment variables, or the admin UI. Precedence: environment variables > `husk.toml` > defaults, with admin UI settings overriding environment variables where both are set (see Admin config).

The config file lives at `~/.husk/husk.toml` (or set `HUSK_CONFIG` to a custom path). Use `husk config` to edit interactively — see CLI.
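For instance, assuming a `husk.toml` that sets the server port, the environment variable overrides it — a minimal sketch:

```bash
# husk.toml contains:  [server] port = 3000
# The environment variable takes precedence, so HUSK listens on 4000:
export HUSK_PORT=4000
```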

## Server

| TOML key | Env var | Default | Description |
| --- | --- | --- | --- |
| `server.port` | `HUSK_PORT` | `3000` | HTTP server port |
| `server.db_path` | `HUSK_DB_PATH` | `data/husk.db` | SQLite database path |
| `server.jwt_secret` | `HUSK_JWT_SECRET` | auto-generated | JWT signing secret |

## Memory

| Config key | Env var | Default | Description |
| --- | --- | --- | --- |
| `memory_mode` | `HUSK_MEMORY_MODE` | `simple` | `simple` or `full` — controls observation capture |
| `session_context_count` | | `5` | Number of recent session summaries injected on session start (1–20) |

`simple` mode disables observation capture entirely — session hooks return empty responses. Use this if you only want manual `remember` / `search` without automatic session tracking.

`full` mode enables observation capture, session tracking, and compression. This is what you want for the full HUSK experience.
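For example, enabling `full` mode via the documented environment variable:

```bash
# Turn on observation capture, session tracking, and compression.
export HUSK_MEMORY_MODE=full
```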

## Storage

Vector storage backend for memory embeddings.

| TOML key | Env var | Default | Description |
| --- | --- | --- | --- |
| `storage.backend` | `HUSK_STORAGE` | `qdrant` | `qdrant` or `sqlite-vec` |
| `storage.url` | `HUSK_STORAGE_URL` | `http://localhost:6333` | Qdrant server URL |
| `storage.path` | `HUSK_STORAGE_PATH` | `data/husk-vectors.db` | sqlite-vec database path |

`sqlite-vec` is embedded and requires no external service. Qdrant is a dedicated vector database — better for larger deployments.
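A sketch of selecting each backend with the documented variables (the Qdrant URL shown is the default):

```bash
# Use a dedicated Qdrant instance:
export HUSK_STORAGE=qdrant
export HUSK_STORAGE_URL=http://localhost:6333

# Or stay embedded, with no external service:
# export HUSK_STORAGE=sqlite-vec
```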

## Embeddings

| TOML key | Env var | Default | Description |
| --- | --- | --- | --- |
| `embeddings.backend` | `HUSK_EMBEDDINGS` | `ollama` | `ollama`, `transformers`, `openai`, `voyage`, or `llamacpp` |
| `embeddings.url` | `HUSK_EMBED_URL` | provider-specific | API/server URL |
| `embeddings.model` | `HUSK_EMBED_MODEL` | provider-specific | Model identifier |
| `embeddings.api_key` | `HUSK_EMBED_API_KEY` | | API key (OpenAI, Voyage) |
| `embeddings.dimensions` | `HUSK_EMBED_DIMENSIONS` | provider-specific | Vector dimensions (must match model) |
| `embeddings.models_path` | `HUSK_EMBED_MODELS_PATH` | `data/models` | Local model cache (transformers only) |

### Provider defaults

| Provider | URL | Model | Dimensions | API key |
| --- | --- | --- | --- | --- |
| `ollama` | `http://localhost:11434` | `nomic-embed-text` | 768 | No |
| `transformers` | | `Xenova/all-MiniLM-L6-v2` | 384 | No |
| `openai` | `https://api.openai.com/v1` | `text-embedding-3-small` | 1536 | Yes |
| `voyage` | `https://api.voyageai.com/v1` | `voyage-3.5` | 1024 | Yes |
| `llamacpp` | `http://localhost:8080/v1` | `default` | provider-determined | No |
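For example, switching from the default Ollama backend to OpenAI only requires the backend and an API key; URL, model, and dimensions fall back to the provider defaults above (the key value here is a placeholder):

```bash
export HUSK_EMBEDDINGS=openai
export HUSK_EMBED_API_KEY="sk-..."        # placeholder; required for openai
# Optional overrides — keep model and dimensions consistent:
# export HUSK_EMBED_MODEL=text-embedding-3-small
# export HUSK_EMBED_DIMENSIONS=1536       # must match the model
```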

## Compression

Session compression summarizes observations into searchable memories.

| TOML key | Env var | Default | Description |
| --- | --- | --- | --- |
| `compression.provider` | `HUSK_COMPRESSION_PROVIDER` | `anthropic` | `anthropic`, `openrouter`, or `ollama` |
| `compression.api_key` | `HUSK_COMPRESSION_API_KEY` | | API key (Anthropic, OpenRouter) |
| `compression.model` | `HUSK_COMPRESSION_MODEL` | provider-specific | Model for summarization |
| `compression.url` | `HUSK_COMPRESSION_URL` | provider-specific | Base URL (OpenRouter, Ollama) |
| `compression.mode` | `HUSK_COMPRESSION_MODE` | `client` | `client` or `server` — see compression modes |
| `compression.batch_size` | `HUSK_COMPRESSION_BATCH_SIZE` | `20` | Observations per batch before compressing (5–100) |
| `compression.interval_minutes` | `HUSK_COMPRESSION_INTERVAL_MINUTES` | `15` | Minutes of inactivity before compressing stale sessions (5–60) |

### Compression providers

| Provider | Default model | URL | API key |
| --- | --- | --- | --- |
| `anthropic` | `claude-haiku-4-5-20251001` | | Yes |
| `openrouter` | `anthropic/claude-haiku-4-5-20251001` | `https://openrouter.ai/api/v1` | Yes |
| `ollama` | `llama3.2` | `http://localhost:11434` | No |

### Compression modes

`client` (default) — the server is dumb storage. Plugins stream observations to the server throughout the session. When uncompressed observations hit `batch_size`, the plugin injects a prompt into the LLM conversation. The LLM reads observations via `get_uncompressed_observations`, writes a summary, and posts it back via `compress_observations`. No server-side LLM or API key needed — the client's own LLM does all summarization.

`server` — the server runs its own LLM to compress observations. Requires a compression provider config (Anthropic API key, OpenRouter, or Ollama). Compression triggers when the batch threshold is reached, when the session ends, or after `interval_minutes` with no new observations.
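A sketch of switching to server-side compression backed by a local Ollama, which avoids the API-key requirement (the URL and model shown are the documented provider defaults):

```bash
export HUSK_COMPRESSION_MODE=server
export HUSK_COMPRESSION_PROVIDER=ollama            # local; no API key required
export HUSK_COMPRESSION_URL=http://localhost:11434 # provider default
export HUSK_COMPRESSION_MODEL=llama3.2             # provider default
```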

## Graph

Knowledge graph for linking memories with typed relationships. See Graph Tools for the MCP tools.

| TOML key | Env var | Default | Description |
| --- | --- | --- | --- |
| `graph.backend` | `HUSK_GRAPH` | `sqlite` | `sqlite`, `neo4j`, or `none` |
| `graph.url` | `HUSK_GRAPH_URL` | `bolt://localhost:7687` | Neo4j connection URL |
| `graph.user` | `HUSK_GRAPH_USER` | `neo4j` | Neo4j username |
| `graph.password` | `HUSK_GRAPH_PASSWORD` | | Neo4j password (required for `neo4j`) |

`sqlite` is embedded in the main database. `neo4j` requires a running Neo4j instance. `none` disables the graph layer entirely — memory search still works, but relationship tools are unavailable.
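For example, pointing the graph layer at Neo4j (the URL and username are the documented defaults; the password is a placeholder and has no default):

```bash
export HUSK_GRAPH=neo4j
export HUSK_GRAPH_URL=bolt://localhost:7687   # default bolt URL
export HUSK_GRAPH_USER=neo4j                  # default username
export HUSK_GRAPH_PASSWORD="change-me"        # placeholder; required for neo4j
```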

## Auth

| TOML key | Env var | Default | Description |
| --- | --- | --- | --- |
| `auth.github_client_id` | `GITHUB_CLIENT_ID` | | GitHub OAuth app client ID |
| `auth.github_client_secret` | `GITHUB_CLIENT_SECRET` | | GitHub OAuth app client secret |
| `auth.oauth_allowed_orgs` | `OAUTH_ALLOWED_ORGS` | | Comma-separated GitHub orgs allowed to log in |
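A sketch of enabling GitHub OAuth via environment variables (the client ID, secret, and org names are all placeholders):

```bash
export GITHUB_CLIENT_ID="Iv1.example"          # placeholder
export GITHUB_CLIENT_SECRET="example-secret"   # placeholder
export OAUTH_ALLOWED_ORGS="acme,acme-labs"     # placeholder org names
```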

## TTL

Default TTLs per scope, in seconds, configurable via environment variables or admin config:

| Env var | Config key | Default | Description |
| --- | --- | --- | --- |
| `HUSK_TTL_DEFAULT_SESSION` | `ttl_default_session` | `7776000` (90 days) | Session memory TTL |
| `HUSK_TTL_DEFAULT_PROJECT` | `ttl_default_project` | (none) | Project memory TTL (forever by default) |
| `HUSK_TTL_DEFAULT_WORKSPACE` | `ttl_default_workspace` | (none) | Workspace memory TTL (forever by default) |
| `HUSK_TTL_DEFAULT_GLOBAL` | `ttl_default_global` | (none) | Global memory TTL (forever by default) |
| `HUSK_TTL_MAX` | `ttl_max` | (none) | Hard ceiling for all TTLs |
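The 90-day session default of `7776000` is 90 × 86400 seconds. A sketch that shortens session memories to 30 days and caps every TTL at one year (both values are example choices, not defaults):

```bash
export HUSK_TTL_DEFAULT_SESSION=2592000   # 30 * 86400 s = 30 days
export HUSK_TTL_MAX=31536000              # 365 * 86400 s = 1 year
```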

## Dedup

| Env var | Config key | Default | Description |
| --- | --- | --- | --- |
| `HUSK_DEDUP_THRESHOLD` | `dedup_threshold` | `0.92` | Similarity threshold (0.5–1.0) |

A higher threshold requires closer matches before two memories count as duplicates, so fewer are merged; a lower threshold dedups more aggressively.
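For example, loosening the threshold to catch more near-duplicates (0.85 is an arbitrary example within the allowed 0.5–1.0 range):

```bash
export HUSK_DEDUP_THRESHOLD=0.85   # more aggressive than the 0.92 default
```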

## Admin config

Some settings can be managed through the admin UI at `/settings`. These are stored in SQLite and take precedence over environment variables when both are set.

## Example `husk.toml`

```toml
[server]
port = 3000
db_path = "data/husk.db"

[storage]
backend = "sqlite-vec"
path = "data/husk-vectors.db"

[embeddings]
backend = "transformers"
dimensions = 384

[compression]
provider = "anthropic"
mode = "server"

[graph]
backend = "sqlite"
```