Programming Neam

Appendix B: Configuration Reference #


This appendix provides a complete reference for the neam.toml configuration file. The neam.toml file lives in the root of your Neam project directory and configures the compiler, runtime, LLM gateway, telemetry, deployment targets, and secrets management.

All configuration sections are optional. A project with no neam.toml file uses default values for everything. Configuration is read at compile time by neamc and at runtime by the VM. Changes to neam.toml require recompilation for deployment targets but take effect immediately for runtime settings when using neam-cli with hot reload.


B.1 [project] #

Project metadata. Used by the compiler, package manager, and deployment generators.

```toml
[project]
name = "my-agent-system"
version = "1.0.0"
entry = "src/main.neam"
description = "A multi-agent customer service platform"
license = "MIT"
authors = ["Alice Smith <alice@example.com>"]
repository = "https://github.com/example/my-agent-system"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| name | string | directory name | Project name. Used in deployment manifests and the package registry. Must be lowercase alphanumeric with hyphens. |
| version | string | "0.1.0" | Semantic version (MAJOR.MINOR.PATCH). |
| entry | string | "main.neam" | Entry-point file, relative to the project root. |
| description | string | "" | Short description for the package registry. |
| license | string | "" | SPDX license identifier. |
| authors | list of strings | [] | Author names and emails. |
| repository | string | "" | Source code repository URL. |

B.2 [state] #

Configures the state backend for agent memory, learning data, prompt evolution history, and distributed locks. The state backend persists all agent cognitive data and conversation history.

```toml
[state]
backend = "postgres"
connection-string = "postgresql://user:pass@host:5432/neam_db"
ttl = "7d"
prefix = "myapp"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| backend | string | "sqlite" | Backend type: "sqlite", "postgres", "redis", "dynamodb", "cosmosdb", "firestore". |
| connection-string | string | "" | Connection string for the backend. For SQLite, this is the file path (default: ~/.neam/memory.db). Supports ${ENV_VAR} interpolation. |
| ttl | string | "30d" | Time-to-live for stored data. Format: "Ns" (seconds), "Nm" (minutes), "Nh" (hours), "Nd" (days). Set to "0" to disable expiration. |
| prefix | string | "neam" | Key prefix for all stored data. Prevents collisions when multiple applications share a backend. |
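
The "Ns"/"Nm"/"Nh"/"Nd" duration format above can be modeled as follows. This is an illustrative Python sketch; the `parse_ttl` helper is hypothetical and not part of Neam.

```python
# Hypothetical parser for Neam-style TTL strings ("Ns", "Nm", "Nh", "Nd").
# "0" disables expiration, matching the documented behavior.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_ttl(value: str) -> int:
    """Return the TTL in seconds; 0 means no expiration."""
    if value == "0":
        return 0
    number, unit = value[:-1], value[-1]
    if unit not in UNITS or not number.isdigit():
        raise ValueError(f"invalid TTL: {value!r}")
    return int(number) * UNITS[unit]

print(parse_ttl("30d"))  # the default, 30 days, expressed in seconds
```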

Backend-Specific Connection Strings #

SQLite:

```toml
connection-string = "./data/memory.db"
# Or an absolute path:
connection-string = "/var/lib/neam/memory.db"
```

PostgreSQL:

```toml
connection-string = "postgresql://user:pass@host:5432/dbname?sslmode=require"
# With an environment variable:
connection-string = "${DATABASE_URL}"
```

Redis:

```toml
connection-string = "redis://user:pass@host:6379/0"
# Redis Sentinel:
connection-string = "redis-sentinel://host1:26379,host2:26379/mymaster/0"
```

DynamoDB:

```toml
connection-string = "dynamodb://us-east-1/neam-state-table"
# Uses AWS credentials from the environment (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
# or an IAM role when running on AWS.
```

CosmosDB:

```toml
connection-string = "${COSMOS_CONNECTION_STRING}"
# Format: AccountEndpoint=https://account.documents.azure.com:443/;AccountKey=...;
```

Firestore:

```toml
connection-string = "firestore://my-project-id/neam-state"
# Uses Application Default Credentials (ADC) for authentication.
# The collection name is derived from the path (neam-state).
# Each data type (memory, learning, evolution) uses a separate sub-collection.
```

📝 Note

PostgreSQL and Redis backends require the corresponding compile flags (-DNEAM_BACKEND_POSTGRES=ON, -DNEAM_BACKEND_REDIS=ON) when building Neam from source. DynamoDB requires -DNEAM_BACKEND_AWS=ON, CosmosDB requires -DNEAM_BACKEND_AZURE=ON, and Firestore requires -DNEAM_BACKEND_GCP=ON.


B.3 [llm] #

Configures the LLM gateway: default provider, rate limiting, circuit breaking, response caching, and cost tracking.

```toml
[llm]
default-provider = "openai"
default-model = "gpt-4o"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| default-provider | string | "" | Default LLM provider for agents that do not specify one. |
| default-model | string | "" | Default model for agents that do not specify one. |

[llm.rate-limits.*] #

Per-provider rate limiting using a token bucket algorithm.

```toml
[llm.rate-limits.openai]
requests-per-minute = 500

[llm.rate-limits.anthropic]
requests-per-minute = 200

[llm.rate-limits.gemini]
requests-per-minute = 300

[llm.rate-limits.ollama]
requests-per-minute = 0  # 0 = unlimited (local)
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| requests-per-minute | number | 60 | Maximum requests per minute to this provider. Uses a token bucket with burst = RPM. Set to 0 for unlimited. |
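
The token-bucket behavior described above can be sketched as follows. This is an illustrative Python model, not the gateway's actual implementation; per the documentation, burst capacity equals the RPM limit and 0 means unlimited.

```python
import time

class TokenBucket:
    """Illustrative token bucket: capacity (burst) = requests-per-minute."""
    def __init__(self, requests_per_minute: int):
        self.capacity = requests_per_minute
        self.tokens = float(requests_per_minute)
        self.rate = requests_per_minute / 60.0  # tokens refilled per second
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never exceeding the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.capacity == 0:  # 0 = unlimited (e.g. local ollama)
            return True
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A bucket configured with `requests-per-minute = 500` therefore allows a burst of up to 500 immediate requests, then sustains roughly 8.3 requests per second.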

[llm.circuit-breaker] #

Circuit breaker configuration for LLM provider calls. Prevents cascading failures when a provider is experiencing issues.

```toml
[llm.circuit-breaker]
failure-threshold = 5
reset-timeout = "30s"
half-open-max = 2
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| failure-threshold | number | 5 | Number of consecutive failures before the circuit opens. |
| reset-timeout | string | "30s" | How long the circuit stays open before transitioning to half-open. Format: "Ns", "Nm", "Nh". |
| half-open-max | number | 2 | Maximum requests allowed in the half-open state before deciding to close or re-open the circuit. |

Circuit breaker states:

- Closed: Normal operation. All requests pass through.
- Open: Entered after failure-threshold consecutive failures. All requests immediately fail with a circuit-open error.
- Half-Open: Entered after reset-timeout elapses. Up to half-open-max requests are allowed through. If they succeed, the circuit closes. If any fail, the circuit re-opens.
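
The state machine above can be sketched in Python. This is an illustrative model of the documented transitions, not the VM's actual code; the field names mirror the config keys.

```python
import time

class CircuitBreaker:
    """Sketch of the documented closed / open / half-open transitions."""
    def __init__(self, failure_threshold=5, reset_timeout=30.0, half_open_max=2):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.half_open_max = half_open_max
        self.state = "closed"
        self.failures = 0
        self.opened_at = 0.0
        self.half_open_inflight = 0

    def allow(self) -> bool:
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"       # reset-timeout elapsed
                self.half_open_inflight = 0
            else:
                return False                   # fail fast with circuit-open
        if self.state == "half-open":
            if self.half_open_inflight >= self.half_open_max:
                return False                   # probe quota exhausted
            self.half_open_inflight += 1
        return True

    def record(self, success: bool):
        if success:
            if self.state == "half-open":
                self.state = "closed"          # probe succeeded: close
            self.failures = 0
        else:
            self.failures += 1
            # Any half-open failure, or reaching the threshold, opens the circuit.
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
                self.failures = 0
```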


[llm.cache] #

Response caching for deterministic LLM calls. Only caches responses when temperature == 0.0.

```toml
[llm.cache]
enabled = true
max-entries = 10000
ttl = "1h"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | false | Enable response caching. |
| max-entries | number | 1000 | Maximum number of cached responses. Uses LRU eviction. |
| ttl | string | "1h" | Time-to-live for cached entries. |

Cache key: SHA-256 hash of provider + model + serialized_messages. Only exact matches are served from cache.
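
The cache-key computation can be sketched as below. The SHA-256-over-provider-model-messages scheme comes from the documentation; the exact message serialization shown here is an assumption for illustration.

```python
import hashlib
import json

def cache_key(provider: str, model: str, messages: list) -> str:
    """Sketch: SHA-256 of provider + model + serialized messages (assumed serialization)."""
    serialized = json.dumps(messages, sort_keys=True, separators=(",", ":"))
    payload = f"{provider}:{model}:{serialized}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```

Because only the exact hash is looked up, any change to the provider, model, or message content produces a different key and therefore a cache miss.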


[llm.cost] #

Daily cost tracking and budget alerts.

```toml
[llm.cost]
daily-budget-usd = 100.0
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| daily-budget-usd | number | 0.0 | Daily cost limit in USD. When exceeded, a warning is logged. Set to 0 to disable. This is a soft limit (warning only, not a hard block). |

[llm.fallback] #

Provider fallback chains. When the primary provider fails and the circuit breaker opens, the gateway automatically routes requests to the next provider in the chain.

```toml
[llm.fallback]
chain = ["openai", "anthropic", "gemini"]
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| chain | list of strings | [] | Ordered list of provider names. The first provider is primary; subsequent providers are tried in order when the primary fails. |

When a fallback is triggered, the gateway maps the original model to an equivalent model on the fallback provider (e.g., gpt-4o maps to claude-sonnet-4-20250514 on Anthropic). You can override these mappings:

```toml
[llm.fallback.model-map]
"gpt-4o" = { anthropic = "claude-sonnet-4-20250514", gemini = "gemini-2.0-flash" }
"gpt-4o-mini" = { anthropic = "claude-haiku-4-20250414", gemini = "gemini-2.0-flash-lite" }
```

[llm.retry] #

Retry configuration for failed LLM requests. Retries happen before circuit breaker evaluation and before fallback.

```toml
[llm.retry]
max-attempts = 3
initial-delay = "500ms"
max-delay = "10s"
multiplier = 2.0
retryable-status-codes = [429, 500, 502, 503]
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| max-attempts | number | 3 | Maximum number of retry attempts per request. |
| initial-delay | string | "500ms" | Delay before the first retry. Format: "Nms", "Ns". |
| max-delay | string | "10s" | Maximum delay between retries (caps exponential growth). |
| multiplier | number | 2.0 | Exponential backoff multiplier. Each retry waits previous_delay * multiplier. |
| retryable-status-codes | list of numbers | [429, 500, 502, 503] | HTTP status codes that trigger a retry. |
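
The backoff schedule implied by these settings can be computed as follows (an illustrative sketch using the documented defaults; `backoff_schedule` is not a Neam API):

```python
def backoff_schedule(max_attempts=3, initial_delay=0.5, max_delay=10.0, multiplier=2.0):
    """Delays (in seconds) before each retry: previous * multiplier, capped at max_delay."""
    delays, delay = [], initial_delay
    for _ in range(max_attempts):
        delays.append(min(delay, max_delay))
        delay *= multiplier
    return delays

print(backoff_schedule())                 # [0.5, 1.0, 2.0] with the defaults
print(backoff_schedule(max_attempts=6))   # later delays are capped at max_delay
```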

[llm.logging] #

Controls LLM request and response logging for debugging and audit purposes.

```toml
[llm.logging]
log-requests = false
log-responses = false
log-level = "info"
redact-keys = true
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| log-requests | bool | false | Log outgoing LLM request payloads. |
| log-responses | bool | false | Log incoming LLM response payloads. |
| log-level | string | "info" | Minimum log level: "debug", "info", "warn", "error". |
| redact-keys | bool | true | Redact API keys and tokens in log output. |

B.4 [telemetry] #

OpenTelemetry export configuration. When enabled, the VM exports traces and metrics to an OTLP-compatible collector.

```toml
[telemetry]
enabled = true
endpoint = "http://otel-collector.monitoring:4318"
service-name = "my-agent-system"
sampling-rate = 0.1
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | false | Enable OpenTelemetry export. |
| endpoint | string | "" | OTLP/HTTP endpoint URL. Must include the port. |
| service-name | string | project name | The service.name resource attribute in exported telemetry. |
| sampling-rate | number | 1.0 | Sampling rate from 0.0 (none) to 1.0 (all). Uses deterministic hash-based sampling for consistency within a trace. |
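
Deterministic hash-based sampling can be sketched as below: hashing the trace ID means every span in a trace gets the same keep/drop decision. The specific hashing scheme here is an assumption, not Neam's actual algorithm.

```python
import hashlib

def sampled(trace_id: str, rate: float) -> bool:
    """Sketch: map the trace ID to a stable value in [0, 1) and compare to the rate."""
    if rate >= 1.0:
        return True
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rate
```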

Exported Spans #

| Span Name | Description |
| --- | --- |
| neam.agent.ask | Each LLM call via .ask() |
| neam.agent.autonomous_action | Each scheduled autonomous execution |
| neam.tool.execute | Each tool invocation |
| neam.agent.reflect | Each reflection evaluation |
| neam.agent.learn | Each learning review |
| neam.agent.evolve | Each prompt evolution |
| neam.runner.run | Each runner execution |
| neam.guardrail.check | Each guardrail evaluation |
| neam.rag.retrieve | Each RAG retrieval |
| neam.voice.transcribe | Each STT call |
| neam.voice.synthesize | Each TTS call |

B.5 [secrets] #

Configures how the runtime resolves secret values (API keys, database passwords).

```toml
[secrets]
provider = "env"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| provider | string | "env" | Secret resolution strategy: "env" (environment variables), "aws-secrets-manager", "gcp-secret-manager", "azure-key-vault", "file". |

Provider-Specific Configuration #

Environment Variables (default):

```toml
[secrets]
provider = "env"
# Reads from environment variables. No additional configuration needed.
# Agent api_key_env fields reference env var names directly.
```

AWS Secrets Manager:

```toml
[secrets]
provider = "aws-secrets-manager"

[secrets.aws]
region = "us-east-1"
secret-name = "neam/production"
# Reads keys from the specified secret in AWS Secrets Manager.
# Each key in the JSON secret maps to an environment variable name.
```

GCP Secret Manager:

```toml
[secrets]
provider = "gcp-secret-manager"

[secrets.gcp]
project = "my-project-id"
# Reads from GCP Secret Manager. Secret names map to env var names.
```

Azure Key Vault:

```toml
[secrets]
provider = "azure-key-vault"

[secrets.azure]
vault-url = "https://my-vault.vault.azure.net/"
# Reads from Azure Key Vault. Secret names map to env var names.
```

File:

```toml
[secrets]
provider = "file"

[secrets.file]
path = "./.env"
# Reads from a dotenv-format file. One KEY=VALUE per line.
```

B.6 [deploy.aws] #

AWS deployment configuration shared across Lambda and ECS targets.

```toml
[deploy.aws]
region = "us-east-1"
account-id = "123456789012"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| region | string | "us-east-1" | AWS region for deployment. |
| account-id | string | required | AWS account ID. Used for ECR image URIs and IAM role ARNs. |

[deploy.aws.lambda] #

AWS Lambda deployment target. Generates a SAM template.

```toml
[deploy.aws.lambda]
function-name = "my-agent-function"
memory = 512
timeout = 30
runtime = "provided.al2023"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| function-name | string | project name | Lambda function name. |
| memory | number | 256 | Memory allocation in MB (128-10240). |
| timeout | number | 30 | Function timeout in seconds (1-900). |
| runtime | string | "provided.al2023" | Lambda runtime identifier. Neam deploys as a custom runtime. |

[deploy.aws.ecs] #

AWS ECS Fargate deployment target.

```toml
[deploy.aws.ecs]
cluster = "my-cluster"
service = "my-service"
task-family = "my-task"
cpu = "512"
memory = "1024"
desired-count = 2
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| cluster | string | "neam-cluster" | ECS cluster name. |
| service | string | project name | ECS service name. |
| task-family | string | project name | ECS task definition family name. |
| cpu | string | "256" | CPU units ("256", "512", "1024", "2048", "4096"). |
| memory | string | "512" | Memory in MB. Must be compatible with the CPU units. |
| desired-count | number | 1 | Number of task instances to run. |

B.7 [deploy.gcp] #

GCP deployment configuration.

```toml
[deploy.gcp]
project = "my-project-id"
region = "us-central1"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| project | string | required | GCP project ID. |
| region | string | "us-central1" | GCP region for deployment. |

[deploy.gcp.cloud-run] #

GCP Cloud Run deployment target.

```toml
[deploy.gcp.cloud-run]
service = "my-agent-service"
max-instances = 10
memory = "512Mi"
cpu = "1"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| service | string | project name | Cloud Run service name. |
| max-instances | number | 10 | Maximum number of instances. |
| memory | string | "256Mi" | Memory limit per instance. |
| cpu | string | "1" | CPU allocation per instance. |

B.8 [deploy.azure] #

Azure deployment configuration shared across deployment targets.

```toml
[deploy.azure]
subscription = "12345678-abcd-efgh-ijkl-123456789012"
resource-group = "my-resource-group"
location = "eastus"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| subscription | string | required | Azure subscription ID. |
| resource-group | string | required | Azure resource group name. |
| location | string | "eastus" | Azure region. |

[deploy.azure.container-apps] #

Azure Container Apps deployment target.

```toml
[deploy.azure.container-apps]
name = "my-agent-app"
environment = "my-container-env"
min-replicas = 1
max-replicas = 5
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| name | string | project name | Container App name. |
| environment | string | required | Container Apps Environment name. |
| min-replicas | number | 0 | Minimum number of replicas. Set to 0 for scale-to-zero. |
| max-replicas | number | 10 | Maximum number of replicas. |

[deploy.azure.aks] #

Azure Kubernetes Service deployment target.

```toml
[deploy.azure.aks]
cluster = "my-aks-cluster"
namespace = "production"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| cluster | string | required | AKS cluster name. |
| namespace | string | "default" | Kubernetes namespace for deployment. |

B.9 [deploy.kubernetes.scaling] #

Horizontal Pod Autoscaler (HPA) configuration for Kubernetes deployments. Applies to any Kubernetes-based target (vanilla Kubernetes, AKS, EKS, GKE).

```toml
[deploy.kubernetes.scaling]
min-replicas = 2
max-replicas = 10
target-cpu-pct = 70
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| min-replicas | number | 1 | Minimum number of pod replicas. |
| max-replicas | number | 5 | Maximum number of pod replicas. |
| target-cpu-pct | number | 80 | Target CPU utilization percentage for autoscaling. |

B.10 [deploy.kubernetes.persistence] #

Persistent Volume Claim (PVC) configuration for Kubernetes deployments. Required when using SQLite as the state backend in Kubernetes (not recommended for production -- use PostgreSQL or Redis instead).

```toml
[deploy.kubernetes.persistence]
enabled = true
storage-class = "gp3"
size = "10Gi"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | false | Enable a PVC for state persistence. |
| storage-class | string | "standard" | Kubernetes storage class name. |
| size | string | "1Gi" | Volume size. |

B.11 [deploy.kubernetes.network] #

Network policy and ingress configuration for Kubernetes deployments.

```toml
[deploy.kubernetes.network]
ingress-enabled = true
allowed-namespaces = ["frontend", "api-gateway"]
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| ingress-enabled | bool | false | Generate an Ingress resource. |
| allowed-namespaces | list of strings | [] | Namespaces allowed to access this service via NetworkPolicy. Empty = no NetworkPolicy generated. |

B.12 [deploy.kubernetes.disruption] #

Pod Disruption Budget (PDB) configuration for Kubernetes deployments. Ensures a minimum number of pods remain available during node drains and cluster upgrades.

```toml
[deploy.kubernetes.disruption]
min-available = 1
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| min-available | number | 1 | Minimum number of pods that must be available during voluntary disruptions. |

B.13 [deploy.docker] #

Docker image configuration used by all container-based deployment targets.

```toml
[deploy.docker]
registry = "ghcr.io/myorg"
image = "my-agent-system"
tag-format = "v{version}-{git-sha}"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| registry | string | "" | Container registry URL. |
| image | string | project name | Image name. |
| tag-format | string | "{version}" | Image tag template. Supports {version} (from [project]), {git-sha} (short commit hash), and {timestamp} (Unix epoch). |
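
Template expansion with the three documented placeholders can be sketched as below; `render_tag` is an illustrative helper, not part of the Neam CLI.

```python
def render_tag(template: str, version: str, git_sha: str, timestamp: int) -> str:
    """Expand the documented {version}, {git-sha}, and {timestamp} placeholders."""
    return (template
            .replace("{version}", version)
            .replace("{git-sha}", git_sha)
            .replace("{timestamp}", str(timestamp)))

print(render_tag("v{version}-{git-sha}", "2.1.0", "a1b2c3d", 1700000000))
# → "v2.1.0-a1b2c3d"
```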

B.14 [neamclaw] #

Configures NeamClaw agent defaults for claw agents, forge agents, channels, and semantic memory. All keys are optional. Per-agent declarations override these project-wide defaults.


[neamclaw.claw] #

Default settings for claw agents (persistent conversational agents).

```toml
[neamclaw.claw]
max-history-turns = 200
idle-reset = "24h"
daily-reset = "04:00"
compaction-threshold = 0.8
compaction-ratio = 0.67
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| max-history-turns | number | 200 | Maximum conversation turns stored before compaction is forced. |
| idle-reset | string | "24h" | Duration of inactivity before a session is automatically reset. Format: "Nh", "Nd". |
| daily-reset | string | "" | Time of day (HH:MM, 24-hour) to reset all idle sessions. Empty = disabled. |
| compaction-threshold | number | 0.8 | Context usage fraction (0.0-1.0) that triggers auto-compaction. |
| compaction-ratio | number | 0.67 | Fraction of oldest turns to summarize during compaction. |
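
A worked example of the compaction settings: with compaction-threshold = 0.8 and compaction-ratio = 0.67, an agent whose context is at least 80% full summarizes the oldest 67% of its stored turns. The helper names and the 128,000-token context window below are assumptions for illustration.

```python
def should_compact(used_tokens: int, context_window: int, threshold: float = 0.8) -> bool:
    """True once context usage reaches the compaction threshold."""
    return used_tokens / context_window >= threshold

def turns_to_summarize(turn_count: int, ratio: float = 0.67) -> int:
    """Number of oldest turns folded into a summary during compaction."""
    return int(turn_count * ratio)

print(should_compact(104_000, 128_000))  # 0.8125 >= 0.8, so compaction triggers
print(turns_to_summarize(200))           # the oldest 134 of 200 turns are summarized
```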

[neamclaw.forge] #

Default settings for forge agents (iterative build agents).

toml
[neamclaw.forge]
max-iterations = 20
max-cost-usd = 5.0
max-tokens = 500000
checkpoint = "git"
workspace-dir = "./workspace"
Key Type Default Description
max-iterations number 10 Default maximum iterations for forge agent loops.
max-cost-usd number 5.0 Default cost budget in USD per forge run.
max-tokens number 500000 Default token budget per forge run.
checkpoint string "git" Default checkpoint strategy: "git", "snapshot", "none".
workspace-dir string "./workspace" Default workspace root directory for forge agents.

[neamclaw.channels] #

Default channel configuration for claw agents.

```toml
[neamclaw.channels]
default-adapter = "cli"
http-port = 8090
max-message-size = 4096
dm-policy = "allow_all"
group-policy = "read_only"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| default-adapter | string | "cli" | Default channel adapter: "cli", "http". |
| http-port | number | 8090 | Port for the HTTP channel adapter. |
| max-message-size | number | 4096 | Maximum inbound message size in bytes. |
| dm-policy | string | "allow_all" | Default DM policy: "allow_all", "allowlist". |
| group-policy | string | "read_only" | Default group policy: "allow_all", "read_only", "deny". |

[neamclaw.memory] #

Default semantic memory configuration for workspace indexing and search.

```toml
[neamclaw.memory]
backend = "sqlite"
search = "hybrid"
vector-weight = 0.7
keyword-weight = 0.3
chunk-size = 512
reindex-debounce = "5s"
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| backend | string | "sqlite" | Memory index backend: "sqlite", "local", "none". |
| search | string | "hybrid" | Search strategy: "hybrid", "vector", "keyword", "none". |
| vector-weight | number | 0.7 | Weight for vector similarity in hybrid search (0.0-1.0). |
| keyword-weight | number | 0.3 | Weight for BM25 keyword matching in hybrid search (0.0-1.0). |
| chunk-size | number | 512 | Token count per chunk when indexing workspace .md files. |
| reindex-debounce | string | "5s" | Minimum interval between re-indexing after file changes. |
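
Hybrid scoring with the default weights can be sketched as a weighted sum. This assumes both scores are already normalized to [0, 1]; the combination formula and function name are illustrative assumptions, not Neam's actual ranking code.

```python
def hybrid_score(vector_sim: float, keyword_score: float,
                 vector_weight: float = 0.7, keyword_weight: float = 0.3) -> float:
    """Combine normalized vector-similarity and BM25 keyword scores."""
    return vector_weight * vector_sim + keyword_weight * keyword_score

# A chunk with strong semantic similarity but weak keyword overlap:
print(round(hybrid_score(0.9, 0.2), 2))  # 0.69
```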

B.15 Complete Example #

Here is a complete neam.toml file demonstrating the most commonly used sections:

```toml
[project]
name = "shopco-platform"
version = "2.1.0"
entry = "src/main.neam"
description = "ShopCo AI-powered customer service platform"
license = "MIT"
authors = ["Platform Team <platform@shopco.com>"]

[state]
backend = "postgres"
connection-string = "${DATABASE_URL}"
ttl = "7d"
prefix = "shopco"

[llm]
default-provider = "openai"
default-model = "gpt-4o"

[llm.rate-limits.openai]
requests-per-minute = 500

[llm.rate-limits.anthropic]
requests-per-minute = 200

[llm.circuit-breaker]
failure-threshold = 5
reset-timeout = "30s"
half-open-max = 2

[llm.cache]
enabled = true
max-entries = 10000
ttl = "1h"

[llm.fallback]
chain = ["openai", "anthropic"]

[llm.retry]
max-attempts = 3
initial-delay = "500ms"
max-delay = "10s"
multiplier = 2.0

[llm.cost]
daily-budget-usd = 100.0

[telemetry]
enabled = true
endpoint = "http://otel-collector.monitoring:4318"
service-name = "shopco-platform"
sampling-rate = 0.1

[secrets]
provider = "aws-secrets-manager"

[secrets.aws]
region = "us-east-1"
secret-name = "shopco/production"

[deploy.docker]
registry = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
image = "shopco-platform"
tag-format = "v{version}-{git-sha}"

[deploy.aws]
region = "us-east-1"
account-id = "123456789012"

[deploy.aws.ecs]
cluster = "shopco-prod"
service = "customer-service"
task-family = "shopco-agents"
cpu = "1024"
memory = "2048"
desired-count = 3

[deploy.gcp]
project = "shopco-backup"
region = "us-central1"

[deploy.gcp.cloud-run]
service = "shopco-backup"
max-instances = 2
memory = "1Gi"
cpu = "1"

[deploy.kubernetes.scaling]
min-replicas = 3
max-replicas = 15
target-cpu-pct = 70

[deploy.kubernetes.persistence]
enabled = false

[deploy.kubernetes.network]
ingress-enabled = true
allowed-namespaces = ["frontend", "api-gateway", "monitoring"]

[deploy.kubernetes.disruption]
min-available = 2

[neamclaw.claw]
max-history-turns = 200
idle-reset = "24h"
compaction-threshold = 0.8

[neamclaw.forge]
max-iterations = 20
max-cost-usd = 10.0
checkpoint = "git"
workspace-dir = "./workspace"

[neamclaw.channels]
default-adapter = "http"
http-port = 8090

[neamclaw.memory]
backend = "sqlite"
search = "hybrid"
```

B.16 Environment Variable Interpolation #

Any string value in neam.toml can reference environment variables using the ${VAR_NAME} syntax:

```toml
[state]
connection-string = "${DATABASE_URL}"

[deploy.aws]
account-id = "${AWS_ACCOUNT_ID}"
```

Environment variables are resolved at runtime. If a referenced variable is not set, the VM raises an error at startup with a clear message indicating which variable is missing and which configuration field references it.

You can also provide default values using the ${VAR_NAME:-default} syntax:

```toml
[state]
backend = "${NEAM_STATE_BACKEND:-sqlite}"

[telemetry]
sampling-rate = "${NEAM_SAMPLING_RATE:-1.0}"
```
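
The ${VAR} and ${VAR:-default} resolution rules can be sketched as below. The regex-based implementation is an assumption for illustration, not the VM's actual code, but it reproduces the documented behavior: a missing variable with no default is a startup error.

```python
import re

_PATTERN = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}")

def interpolate(value: str, env: dict) -> str:
    """Expand ${VAR} / ${VAR:-default}; raise if VAR is unset and has no default."""
    def replace(match):
        name, default = match.group(1), match.group(2)
        if name in env:
            return env[name]
        if default is not None:
            return default
        raise RuntimeError(f"environment variable {name} is not set")
    return _PATTERN.sub(replace, value)

print(interpolate("${NEAM_STATE_BACKEND:-sqlite}", {}))  # falls back to "sqlite"
print(interpolate("${DATABASE_URL}", {"DATABASE_URL": "postgresql://h/db"}))
```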

B.17 [dependencies] and [dev-dependencies] #

Package dependencies managed by neam-pkg. Dependencies are downloaded from the Neam package registry and linked at compile time.

```toml
[dependencies]
rag-utilities = "1.2.0"
pii-guardrails = "0.8.0"
citation-tools = "1.0.0"

[dev-dependencies]
test-fixtures = "0.5.0"
mock-llm = "1.0.0"
```

A dependency can be specified in several forms: a registry version, a git repository, a local path, or a registry version with optional features:

```toml
[dependencies]
# Version from registry
analytics = "1.0.0"

# Git repository
ai-tools = { git = "https://github.com/neam/ai-tools", branch = "main" }

# Local path (for development)
my-utils = { path = "../my-utils" }

# Version with optional features
ml-lib = { version = "2.0.0", features = ["gpu", "advanced"] }
```

B.18 [test] #

Test runner configuration for neamc test.

```toml
[test]
timeout = 120
parallel = true
coverage = true
coverage-threshold = 80
include = ["tests/**/*.neam"]
exclude = ["tests/integration/**"]
```

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| timeout | number | 60 | Maximum execution time per test file in seconds. |
| parallel | bool | false | Run test files in parallel. |
| coverage | bool | false | Enable code coverage collection. |
| coverage-threshold | number | 0 | Minimum coverage percentage required for neamc test to pass (0 = no minimum). |
| include | list of strings | ["tests/**/*.neam"] | Glob patterns for test file discovery. |
| exclude | list of strings | [] | Glob patterns to exclude from test discovery. |

B.19 [profile.*] #

Build profiles control optimization and debug settings. Three built-in profiles are available: dev, release, and bench.

```toml
[profile.dev]
optimization = 0
debug = true
source-maps = true

[profile.release]
optimization = 2
debug = false
source-maps = false
strip = true

[profile.bench]
optimization = 2
debug = false
source-maps = false
```

| Key | Type | Default (dev) | Default (release) | Description |
| --- | --- | --- | --- | --- |
| optimization | number | 0 | 2 | Optimization level: 0 (none), 1 (basic), 2 (full). |
| debug | bool | true | false | Include debug information in bytecode. |
| source-maps | bool | true | false | Generate source maps for debugger support. |
| strip | bool | false | true | Strip symbol names from bytecode (reduces file size). |

The profile is selected with neamc build --release or neamc build --profile bench. The dev profile is used by default.


B.20 [scripts] #

Named scripts that can be executed with neamc run-script.

```toml
[scripts]
seed = "neam seed_data.neamb"
migrate = "neam migrate.neamb"
eval = "neam-gym --agent main.neam --dataset eval/tests.jsonl --runs 3"
lint = "neamc lint src/"
```

Scripts are executed in the project root directory. They can reference any installed CLI tool and accept additional arguments from the command line:

```bash
neamc run-script eval -- --verbose
```