Programming Neam

Chapter 2: Installation and Setup #

"Before you can run, you must build."

In this chapter, you will install Neam's dependencies, build the toolchain from source, verify the build, configure your editor for Neam development, and set up the environment variables needed to connect to LLM providers. By the end, you will have a working Neam installation and will compile and run your first program.

💡 Tip

For a guided quick-start experience, visit https://neam.lovable.app/ -- it walks you through installation and getting your first Neam program running.


2.1 What You Are Building #

Before diving into installation steps, it is worth understanding exactly what the Neam build process produces. Unlike Python-based agent frameworks where you pip install a library and remain inside the Python ecosystem, building Neam gives you a complete, self-contained toolchain -- nine executables and a shared library that together form the most comprehensive compiled infrastructure for AI agent development available today.

2.1.1 The Neam Toolchain at a Glance #

| Executable | Purpose | Typical Size |
| --- | --- | --- |
| neamc | Compiler: translates .neam source to .neamb bytecode | ~3.5 MB |
| neam | Virtual machine: executes .neamb bytecode files | ~5.6 MB |
| neam-cli | Interactive REPL with watch mode for rapid development | ~3.2 MB |
| neam-api | HTTP API server exposing agents as REST endpoints | ~4.1 MB |
| neam-pkg | Package manager for dependency resolution and publishing | ~2.8 MB |
| neam-lsp | Language Server Protocol server for IDE integration | ~3.0 MB |
| neam-dap | Debug Adapter Protocol server for step-through debugging | ~2.9 MB |
| neam-forge | CLI runner for forge agent build-verify loops | ~3.3 MB |
| neam-gym | Evaluation framework with 5 grading strategies | ~3.1 MB |
| libneam | Shared library with 53 exported C functions | ~3.8 MB |

The executables and library above total roughly 35 MB, but a minimal deployment bundle (the neam VM plus a compiled .neamb file) has a native binary footprint of approximately 9.1 MB. To put this in perspective:

| Deployment Target | Neam | Python + LangChain | Ratio |
| --- | --- | --- | --- |
| Native binary | 9.1 MB | N/A | -- |
| Docker image (minimal) | 32 MB | 1,400 MB | 43x smaller |
| AWS Lambda package | 14 MB | 280 MB | 20x smaller |
| WASM module | 2.4 MB | N/A | -- |

These numbers are not theoretical -- they come from measured benchmarks comparing equivalent B5-class agent programs (six agents, five handoffs, RAG, voice, and MCP integration). The dramatic size difference is a direct consequence of Neam's compiled architecture: there is no interpreter, no package dependency tree, no virtual environment overhead.
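As a quick sanity check, the ratios in the table can be reproduced from the raw sizes. The following sketch (plain Python, using only the figures quoted above) derives the "43x" and "20x" columns:

```python
# Deployment sizes in MB, taken from the comparison table above.
sizes = {
    "docker_minimal": {"neam": 32, "python_langchain": 1400},
    "aws_lambda":     {"neam": 14, "python_langchain": 280},
}

def size_ratio(target: str) -> int:
    """Return how many times smaller the Neam artifact is, rounded down."""
    s = sizes[target]
    return s["python_langchain"] // s["neam"]

print(size_ratio("docker_minimal"))  # 43 -- the "43x smaller" in the table
print(size_ratio("aws_lambda"))      # 20
```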

2.1.2 Startup Performance #

The deployment size advantage translates directly into startup speed, which is critical for serverless functions, edge computing, and autoscaled Kubernetes pods:

| Scenario | Neam | Python + LangChain | Speedup |
| --- | --- | --- | --- |
| Compile + first agent call | 32 ms | 2,800 ms | 87x |
| Cached bytecode + first agent call | 9 ms | 2,800 ms | 311x |

Python's startup cost comes from importing the interpreter, loading the framework module tree (LangChain alone pulls in hundreds of transitive dependencies), and initializing the runtime. Neam's VM loads pre-compiled bytecode and begins execution immediately -- the entire startup path is a single mmap of the .neamb file followed by a jump to the entry point.
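The difference is easy to picture. The sketch below simulates the Neam-style cold path in Python: memory-map a stand-in bytecode file and validate a 4-byte magic header. The file layout here is invented for illustration; the point is that nothing is imported, parsed, or dependency-resolved before execution can begin.

```python
import mmap

# Write a stand-in bytecode file with a 4-byte "NEAM" magic header.
# (The header name follows the bytecode-inspection exercise later in this
# chapter; the rest of the layout is made up for this demonstration.)
with open("fake.neamb", "wb") as f:
    f.write(b"NEAM" + bytes(60))

# Cold path: map the file into memory and check the header -- no module
# tree to import, no interpreter warm-up.
with open("fake.neamb", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        magic_ok = (mm[:4] == b"NEAM")

print(magic_ok)  # True
```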

2.1.3 The Standard Library #

Beyond the core toolchain, Neam includes a standard library of 445+ modules organized across 27 domains. These modules are written in Neam itself and ship in the stdlib/ directory:

| Domain | Purpose |
| --- | --- |
| agents/ | Pre-built agent archetypes: analyst, browser, deep search, testing, coder, document processor, memory manager, orchestrator, security auditor |
| ai/ | AI capability trait definitions |
| async/ | Async/await primitives |
| audio/ | Audio processing |
| collections/ | Data structures (lists, maps, sets) |
| core/ | Core language functions |
| crypto/ | Cryptographic functions |
| data/ | Data processing |
| graders/ | Evaluation graders for neam-gym |
| ingest/ | Document ingestion: parsers for audio, code, images, logs, OCR, Office documents, PDFs, video; chunking strategies; embedding pipelines |
| io/ | File and stream I/O |
| math/ | Mathematical functions |
| media/ | Media handling |
| net/ | Network utilities |
| observability/ | Logging, tracing, metrics (56 modules) |
| project/ | Project management and deploy generators |
| rag/ | RAG strategies: agentic, CRAG, GraphRAG, HyDE, Self-RAG; plus retriever, reranker, chain, and evaluation modules |
| realtime/ | Real-time voice session management |
| schemas/ | Data schema definitions |
| speech/ | Speech processing events and trait definitions |
| testing/ | Testing framework |
| text/ | Text processing |
| time/ | Time and date utilities |
| trust/ | Guardrails and safety gate framework |
| vad/ | Voice activity detection state machine |
| voice/ | Voice pipeline agent and budget management |

You do not need to install these separately -- they are bundled with the Neam repository and available via import in any Neam program:

neam
import "stdlib/rag/agentic.neam"
import "stdlib/agents/analyst.neam"
import "stdlib/ingest/pdf.neam"

2.1.4 What the Toolchain Provides #

The Neam compiler and runtime are written in C++20 and implement the following major subsystems:

| Subsystem | What It Does for You |
| --- | --- |
| Virtual machine | Executes your compiled bytecode with 84 opcodes and automatic memory management |
| Parser | Reads your .neam source and builds a structured representation for compilation |
| Compiler | Validates agent declarations, resolves modules, and produces optimized bytecode |
| LLM adapters | Uniform interface across OpenAI, Anthropic, Gemini, Ollama, Azure, Bedrock, and Vertex AI |
| Voice pipeline | STT, TTS, and real-time WebSocket audio across 6 voice providers |
| MCP client | Model Context Protocol integration with stdio and SSE transports |
| Cognitive engine | Reasoning, reflection, learning, evolution, and autonomy subsystems |
| A2A server | Agent-to-Agent protocol implementation over JSON-RPC 2.0 |
| Standard library | 445+ Neam-language modules covering agents, RAG, ingestion, voice, observability, and more |

You do not need to understand the implementation details to use Neam effectively -- the toolchain handles everything behind a clean compilation interface. The build process compiles all of these subsystems into the nine executables listed in Section 2.1.1.


2.2 System Prerequisites #

Neam is written in C++20 and uses CMake as its build system. The build process automatically downloads and compiles all third-party dependencies, so you only need three things installed on your system:

| Prerequisite | Minimum Version | Purpose |
| --- | --- | --- |
| CMake | 3.20+ | Build system |
| C++ compiler | C++20 (GCC 12+, Clang 15+, or MSVC 2022+) | Compiles Neam's C++ source |
| Git | Any recent version | Clones the repository |

Additionally, the build requires development headers for two system libraries:

| Library | Purpose |
| --- | --- |
| libcurl | HTTP client for LLM API calls, package registry |
| OpenSSL | TLS, cryptographic functions, HMAC |

The following sections walk through installing these prerequisites on each platform.


2.3 macOS Setup #

macOS is Neam's primary development platform. The fastest path is to install Xcode Command Line Tools (which provides Clang) and then use Homebrew for the remaining dependencies.

Step 1: Install Xcode Command Line Tools #

Open Terminal and run:

bash
xcode-select --install

This installs Clang (Apple's C++ compiler), make, git, and other essential development tools. Follow the on-screen prompts to complete the installation.

Verify the installation:

bash
clang++ --version

You should see output indicating Apple Clang version 15 or later (which supports C++20).

Step 2: Install Homebrew #

If you do not already have Homebrew installed:

bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Step 3: Install Dependencies #

bash
brew install cmake curl openssl
📝 Note

macOS ships with a version of libcurl, but the Homebrew version includes development headers and is generally more up-to-date. CMake will find the correct version automatically.

Step 4: Verify Prerequisites #

bash
cmake --version      # Should be 3.20+
clang++ --version    # Should support C++20 (Apple Clang 15+)
git --version        # Any recent version

2.4 Linux Setup #

Neam builds on any Linux distribution with a C++20 compiler. The instructions below cover the two most common package managers.

Debian / Ubuntu (apt) #

bash
sudo apt update
sudo apt install -y build-essential cmake git libcurl4-openssl-dev libssl-dev

This installs GCC (with C++20 support), CMake, Git, libcurl development headers, and OpenSSL development headers.

Fedora / RHEL / CentOS (yum/dnf) #

bash
sudo dnf groupinstall -y "Development Tools"
sudo dnf install -y cmake git libcurl-devel openssl-devel

On older CentOS versions that use yum instead of dnf, replace dnf with yum.

Arch Linux (pacman) #

bash
sudo pacman -S base-devel cmake git curl openssl

Verify Prerequisites #

bash
g++ --version        # Should be GCC 12+ (or clang++ 15+)
cmake --version      # Should be 3.20+
git --version
pkg-config --libs libcurl    # Should output -lcurl
pkg-config --libs openssl    # Should output -lssl -lcrypto
💡 Tip

If your distribution ships an older version of CMake (below 3.20), you can install a newer version from the CMake website or via pip install cmake.


2.5 Windows Setup #

Neam supports two compiler toolchains on Windows: MSVC (Visual Studio) and MinGW. MSVC is recommended for the simplest experience.

Option A: Visual Studio (MSVC) #

  1. Download and install Visual Studio 2022 or later (Community edition is free). During installation, select the "Desktop development with C++" workload.

  2. Install CMake from https://cmake.org/download/ (or use the version bundled with Visual Studio).

  3. Install Git from https://git-scm.com/download/win.

  4. Install vcpkg for managing libcurl and OpenSSL:

powershell
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
.\bootstrap-vcpkg.bat
.\vcpkg install curl:x64-windows openssl:x64-windows
  5. Open a Developer Command Prompt for VS (or x64 Native Tools Command Prompt) and proceed to the build instructions in Section 2.6.

Option B: MinGW (MSYS2) #

  1. Download and install MSYS2 from https://www.msys2.org/.

  2. Open the MSYS2 MinGW 64-bit shell and install dependencies:

bash
pacman -S mingw-w64-x86_64-gcc mingw-w64-x86_64-cmake \
          mingw-w64-x86_64-curl mingw-w64-x86_64-openssl git
  3. Proceed to the build instructions in Section 2.6.
📝 Note

The uSearch HNSW vector index is automatically disabled for MinGW cross-compilation. RAG features will use a fallback linear search implementation. For full HNSW support on Windows, use MSVC.


2.6 Building from Source #

With prerequisites installed, building Neam is a three-command process.

Step 1: Clone the Repository #

bash
git clone https://github.com/neam-lang/Neam.git
cd Neam

Step 2: Configure with CMake #

bash
mkdir -p build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release

During the cmake configuration step, CMake will automatically download and build the following third-party dependencies using FetchContent:

| Dependency | Version | Purpose |
| --- | --- | --- |
| tree-sitter | v0.22.6 | Incremental parsing library |
| nlohmann/json | v3.11.3 | JSON parsing and serialization |
| json-schema-validator | v2.3.0 | JSON Schema validation for tool parameters |
| miniz | v3.0.2 | Compression for bytecode bundles |
| uSearch | v2.6.0 | HNSW vector index for RAG |
| lexbor | v2.2.0 | HTML parsing for web source ingestion |
| Google Test | v1.14.0 | Unit testing framework |
| Google Benchmark | v1.8.3 | Microbenchmark framework |

Additionally, Neam bundles the SQLite3 amalgamation in deps/sqlite3/ for persistent memory and cognitive feature storage.

📝 Note

The first cmake invocation downloads all dependencies, which may take several minutes depending on your internet connection. Subsequent builds use cached downloads.

(Build flow: git clone → cmake .. -DCMAKE_BUILD_TYPE=Release, during which FetchContent downloads tree-sitter, nlohmann/json, miniz, uSearch, lexbor, googletest, benchmark, and json-schema-validator → cmake --build . --parallel → build artifacts: neamc, neam, neam-cli, neam-api, neam-pkg, neam-lsp, neam-dap, neam-forge, neam-gym, and libneam.)

Step 3: Build #

bash
cmake --build . --parallel

The --parallel flag uses all available CPU cores. On a modern machine, the build typically takes 2-5 minutes.

Optional CMake Flags #

Neam's build system supports several optional flags for enabling cloud backends (introduced in v0.6.5):

| Flag | Default | Description |
| --- | --- | --- |
| NEAM_USE_HNSW | ON | Use uSearch HNSW index for vector search |
| NEAM_BACKEND_POSTGRES | OFF | Enable PostgreSQL state backend (requires libpq) |
| NEAM_BACKEND_REDIS | OFF | Enable Redis state backend (requires hiredis) |
| NEAM_BACKEND_AWS | OFF | Enable AWS backends (DynamoDB, Bedrock) |
| NEAM_BACKEND_GCP | OFF | Enable GCP backends (Vertex AI, Firestore) |
| NEAM_BACKEND_AZURE | OFF | Enable Azure backends (CosmosDB) |

Example with PostgreSQL and Redis enabled:

bash
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DNEAM_BACKEND_POSTGRES=ON \
         -DNEAM_BACKEND_REDIS=ON

For the exercises in this book, the default build (no extra flags) is sufficient.


2.7 Verifying the Build #

After a successful build, verify that all executables were produced:

bash
ls -la neamc neam neam-cli neam-api neam-pkg neam-lsp neam-dap neam-forge neam-gym

On macOS and Linux, you should also see the shared library:

bash
ls -la libneam.*    # libneam.dylib (macOS) or libneam.so (Linux)

On Windows (MSVC), the library is neam.dll.

Quick Smoke Test #

Create a file called hello.neam in the build directory (or any directory):

neam
{
  emit "Hello, Neam!";
}

Compile and run:

bash
./neamc hello.neam -o hello.neamb
./neam hello.neamb

Expected output:

text
Hello, Neam!

If you see this output, your Neam installation is working correctly.

Running the Test Suite #

To run the built-in tests and verify that the VM, compiler, and standard library are functioning correctly:

bash
ctest --output-on-failure

This runs all registered tests: vm_test, compiler_test, async_unit_test, runtime_executor_test, stdlib_list_test, stdlib_map_test, type_unification_test, test_framework_test, and io_file_test.

All tests should pass. If any test fails, check the output for details and ensure your compiler and system libraries meet the minimum version requirements.

Build Verification Benchmarks #

Once your build is complete, it is useful to verify that your installation meets the expected performance characteristics. These benchmarks were measured on representative hardware and published in the Neam architecture paper (Govindaraj et al., 2026). Your numbers may vary depending on hardware, but the relative ratios should be consistent.

Compilation performance:

| Benchmark | Expected | What It Measures |
| --- | --- | --- |
| Compile a 50-line agent program | < 20 ms | Lexer → Parser → AST → Bytecode pipeline |
| Compile a 500-line multi-agent program | < 50 ms | Full compilation with agent topology validation |
| Startup to first agent call (cached .neamb) | < 10 ms | VM initialization + bytecode loading |
| Startup to first agent call (compile + run) | < 35 ms | End-to-end cold path |

You can measure compilation time with:

bash
time ./neamc program.neam -o program.neamb

Memory footprint (Resident Set Size):

| Program Complexity | Expected RSS | Comparison |
| --- | --- | --- |
| Single agent, no tools | ~4.5 MB | Python+LangChain: ~89 MB (20x larger) |
| 4 agents, 3 handoffs | ~6.2 MB | Python+LangChain: ~112 MB (18x larger) |
| 6 agents, RAG, voice, MCP | ~14.8 MB | Python+LangChain: ~210 MB (14x larger) |

You can measure RSS with:

bash
# macOS
/usr/bin/time -l ./neam program.neamb 2>&1 | grep "maximum resident set size"

# Linux
/usr/bin/time -v ./neam program.neamb 2>&1 | grep "Maximum resident set size"

Per-turn orchestration overhead (non-LLM operations):

| Operation | Expected Latency |
| --- | --- |
| Handoff routing decision | ~0.8 μs |
| Context serialization (1 KB) | ~3.5 μs |
| Trace entry creation | ~0.5 μs |
| MCP tool discovery | ~2.1 μs |
| Total per-turn overhead | ~6.9 μs |

For comparison, the same operations in Python+LangChain total approximately 260 μs -- a 38x difference. At scale (50 iterations with 100 concurrent agents), this adds up: Neam completes non-LLM orchestration in 34.5 ms versus Python's 1,300 ms.
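The scaling claim is straightforward arithmetic. The following sketch (figures taken from the table and paragraph above) reproduces the 34.5 ms and 1,300 ms totals:

```python
# Per-turn orchestration overhead, in microseconds (from the table above).
NEAM_PER_TURN_US = 6.9
PYTHON_PER_TURN_US = 260.0

def total_overhead_ms(per_turn_us: float, iterations: int, agents: int) -> float:
    """Total non-LLM orchestration time in milliseconds."""
    return per_turn_us * iterations * agents / 1000.0

neam_total = total_overhead_ms(NEAM_PER_TURN_US, 50, 100)      # 34.5 ms
python_total = total_overhead_ms(PYTHON_PER_TURN_US, 50, 100)  # 1300.0 ms
print(round(python_total / neam_total))  # 38
```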

📝 Note

These benchmarks measure only the orchestration overhead -- the time spent in Neam's VM routing messages, serializing context, and creating traces. The actual LLM API call latency (typically 200-2,000 ms) is the same regardless of which language you use. The orchestration overhead matters most in high-throughput, multi-agent systems where agents make thousands of turns per minute.

Compile-Time Error Detection #

One of Neam's most significant advantages over Python-based frameworks is its ability to catch agent configuration errors at compile time, before any code runs or any LLM API calls are made. The compiler validates 12 categories of errors that Python discovers only at runtime (if at all):

| Error Category | Neam | Python | Python + mypy |
| --- | --- | --- | --- |
| Undefined agent reference in handoff | Compile | Runtime | Runtime |
| Missing required agent field (provider, model) | Compile | Runtime | Runtime |
| Handoff to non-existent agent | Compile | Runtime | Runtime |
| Runner referencing undefined entry agent | Compile | Runtime | Runtime |
| Invalid provider name | Compile | Runtime | Runtime |
| Temperature out of range (0.0–2.0) | Compile | Runtime | N/A |
| Tool parameter schema mismatch | Compile | Runtime | Partial |
| MCP server reference to undeclared server | Compile | Runtime | N/A |
| Type mismatch in expressions | Compile | Runtime | Compile |
| Undefined variable | Compile | Runtime | Compile |
| Unreachable agent in handoff graph | Compile (warning) | Never | Never |
| Voice pipeline missing required properties | Compile | Runtime | N/A |
| Total compile-time catchable | 12/12 | 0/12 | 2/12 |

This means that when neamc compiles your program without errors, you can be confident that the agent topology is valid, all handoff targets exist, all tool schemas are well-formed, and all provider configurations are complete. You will never discover at 3 AM that a production agent fails because a handoff target was misspelled.
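To see why such checks are cheap to run before deployment, here is a minimal Python model of the kind of topology validation described above: undefined handoff targets become errors, and agents unreachable from the entry agent become warnings. The data structures and function are illustrative, not neamc internals.

```python
# Illustrative model of static handoff validation (not actual neamc code).
def validate_topology(agents, handoffs, entry):
    """Return (errors, warnings) for an agent handoff graph.

    agents   -- set of declared agent names
    handoffs -- dict: agent name -> list of handoff target names
    entry    -- the runner's entry agent
    """
    errors, warnings = [], []
    if entry not in agents:
        errors.append(f"entry agent '{entry}' is not declared")
    for src, targets in handoffs.items():
        for t in targets:
            if t not in agents:
                errors.append(f"handoff from '{src}' to undefined agent '{t}'")
    # Reachability: walk the handoff graph from the entry agent.
    seen, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node in seen or node not in agents:
            continue
        seen.add(node)
        stack.extend(handoffs.get(node, []))
    for a in sorted(agents - seen):
        warnings.append(f"agent '{a}' is unreachable from '{entry}'")
    return errors, warnings

# A misspelled handoff target is caught before anything runs:
errs, warns = validate_topology(
    agents={"Triage", "Billing", "Support"},
    handoffs={"Triage": ["Billing", "Suport"]},  # typo: "Suport"
    entry="Triage",
)
print(errs)   # one undefined-agent error for "Suport"
print(warns)  # "Support" is unreachable from "Triage"
```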


2.8 Setting Up Your Editor #

Neam ships with an LSP server (neam-lsp) that provides IDE support. This section covers VS Code setup, which is the most common configuration.

VS Code Setup #

  1. Install the Neam extension (if available in the marketplace), or configure a generic LSP client.

  2. Configure the LSP client. Add the following to your VS Code settings.json:

json
{
  "neam.lsp.path": "/path/to/your/build/neam-lsp",
  "files.associations": {
    "*.neam": "neam"
  }
}

Replace /path/to/your/build/neam-lsp with the actual path to the neam-lsp executable in your build directory.

  3. Configure the debugger. Create a .vscode/launch.json file in your project:
json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "neam",
      "request": "launch",
      "name": "Debug Neam Program",
      "program": "${workspaceFolder}/program.neamb",
      "dapExecutable": "/path/to/your/build/neam-dap"
    }
  ]
}

Other Editors #

The LSP server works with any editor that supports the Language Server Protocol: configure your editor's LSP client to launch the neam-lsp executable for files with the .neam extension.

Adding Neam to Your PATH #

For convenience, add the build directory to your shell's PATH so you can invoke Neam tools from any directory:

bash
# Add to ~/.bashrc, ~/.zshrc, or equivalent
export PATH="/path/to/Neam/build:$PATH"

After sourcing the file or opening a new terminal, you can use:

bash
neamc program.neam -o program.neamb
neam program.neamb
neam-cli --watch program.neam

2.9 Environment Variables for LLM Providers #

Neam agents connect to LLM providers via HTTP APIs. Each provider requires an API key (except Ollama, which runs locally). Neam implements a uniform adapter interface across all providers -- the same agent declaration works with any provider by changing a single line. The VM handles the protocol differences between providers so you do not have to.

2.9.1 Provider Capabilities #

Each provider adapter implements a common interface but offers different capabilities:

| Provider | API Endpoint | Function Calling | Streaming | Vision | Models |
| --- | --- | --- | --- | --- | --- |
| OpenAI | /v1/chat/completions | tool_calls array | SSE | image_url, image_base64 | GPT-4o, GPT-4o-mini, o1, o3-mini |
| Anthropic | /v1/messages | tool_use blocks | SSE | Base64 images | Claude Opus 4.6, Claude Sonnet 4.5 |
| Gemini | /v1beta/models/{m}:generateContent | functionCall parts | SSE | Image parts | Gemini 2.0, Gemini 1.5 Pro |
| Ollama | http://localhost:11434/api/chat | Tool calls | Streaming | -- | Llama 3, Qwen 2.5, Qwen 3, Mistral, etc. |
| Azure OpenAI | Custom endpoint | tool_calls array | SSE | Same as OpenAI | GPT-4o (Azure-hosted) |
| AWS Bedrock | Regional endpoint | Provider-specific | SSE | Provider-specific | Claude, Llama, Titan |
| GCP Vertex AI | Regional endpoint | functionCall parts | SSE | Image parts | Gemini (GCP-hosted) |

All providers support the same Neam agent declaration syntax. Switching providers requires changing only the provider and model fields:

neam
agent MyAgent {
  provider: "anthropic"        // was: "openai"
  model: "claude-sonnet-4-20250514"  // was: "gpt-4o"
  system: "You are a helpful assistant."
}
💡 Tip

For v0.6.5 cloud-native deployments, Neam also supports AWS Bedrock, Azure OpenAI, and GCP Vertex AI as additional providers. These require cloud-specific build flags and authentication configuration, covered in Chapter 19.

2.9.2 API Keys and Authentication #

Set the following environment variables to enable the providers you intend to use:

| Provider | Environment Variable | How to Get a Key |
| --- | --- | --- |
| OpenAI | OPENAI_API_KEY | https://platform.openai.com/api-keys |
| Anthropic | ANTHROPIC_API_KEY | https://console.anthropic.com/ |
| Google Gemini | GEMINI_API_KEY | https://aistudio.google.com/apikey |
| Ollama | OLLAMA_HOST (optional) | Install from https://ollama.com/ -- no key needed |

Setting Environment Variables #

macOS / Linux (bash or zsh):

bash
# Add to ~/.bashrc, ~/.zshrc, or equivalent
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="AI..."

Source the file:

bash
source ~/.zshrc    # or source ~/.bashrc

Windows (PowerShell):

powershell
$env:OPENAI_API_KEY = "sk-..."
$env:ANTHROPIC_API_KEY = "sk-ant-..."
$env:GEMINI_API_KEY = "AI..."

For persistent environment variables on Windows, use the System Properties dialog (search for "Environment Variables" in the Start menu).

Ollama Setup #

Ollama runs LLMs locally on your machine. It requires no API key, making it ideal for development, testing, and privacy-sensitive applications.

  1. Install Ollama from https://ollama.com/

  2. Pull a model:

bash
ollama pull llama3
ollama pull qwen2.5:14b
ollama pull qwen3:1.7b          # Lightweight model for development
ollama pull nomic-embed-text    # For RAG embeddings
  3. Verify Ollama is running:
bash
curl http://localhost:11434/api/tags

By default, Ollama listens on http://localhost:11434. If you change the host or port, set the OLLAMA_HOST environment variable:

bash
export OLLAMA_HOST="http://custom-host:11434"

Testing Your Provider Setup #

Create a test file test_provider.neam:

neam
agent TestBot {
  provider: "openai"
  model: "gpt-4o-mini"
  system: "Reply with exactly: Provider working."
}

{
  let response = TestBot.ask("Test");
  emit response;
}

Compile and run:

bash
neamc test_provider.neam -o test_provider.neamb
neam test_provider.neamb

If the output includes "Provider working" (or a similar response), your API key is configured correctly. Repeat with provider: "anthropic", provider: "gemini", and provider: "ollama" to verify each provider.

⚠️ Important

API keys are secrets. Never commit them to version control. Use environment variables or a secrets manager. Neam's v0.6.5 cloud-native features include integration with AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager for production deployments.


2.10 Using Pre-Built Binaries #

💡 Recommendation

The GitHub-based installer described below is the recommended method for production deployments and operations teams. It provides checksum-verified, auditable installs with deterministic directory layouts suitable for policy-aware environments. Building from source (Section 2.6) is recommended for development and contributing, as it gives you access to debug builds, build flags, and the test suite.

2.10.1 The Installer Script #

Starting with v0.6.5 (spec v0.13), Neam provides a portable installer script that downloads pre-built binaries directly from GitHub Releases with built-in checksum verification and auditable receipts. The installer follows a defense-in-depth approach: TLS-only downloads, SHA-256 verification, no curl | sh, and dry-run by default.

The installer script is located at scripts/neam-install.sh in the Neam repository.

Plan mode (dry-run) -- preview what will happen without installing:

bash
./scripts/neam-install.sh --version v0.6.5

This prints the computed download URLs, the target install directory, and the tools required for installation, without downloading or writing anything. Always run plan mode first to verify the configuration.

Execute mode -- perform the installation:

bash
./scripts/neam-install.sh --version v0.6.5 --execute

This downloads the platform-specific tarball, verifies the SHA-256 checksum against the published SHA256SUMS file, extracts the binaries to a versioned directory (default: /usr/local/neam/v0.6.5), creates symlinks for neam and neamc, and emits an install receipt for auditability. The install to /usr/local may require sudo.

Custom install directory (no sudo required):

bash
./scripts/neam-install.sh --version v0.6.5 --install-dir "$HOME/.local/neam" --execute

SHA-256 checksum verification is performed automatically during --execute mode. You can also verify manually after a download:

bash
# Download the checksum file
curl -LO https://github.com/neam-lang/Neam/releases/download/v0.6.5/SHA256SUMS

# Verify the tarball
sha256sum -c SHA256SUMS --ignore-missing
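The same manual check can be done with a short script, which is useful on systems where sha256sum is not available (macOS ships shasum instead). This sketch (Python standard library only; the file name and contents are placeholders) verifies a file against one "digest  filename" line of the kind found in SHA256SUMS:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(sums_line: str, path: str) -> bool:
    """Check a file against one 'digest  filename' line from SHA256SUMS."""
    expected = sums_line.split()[0]
    return sha256_of(path) == expected

# Example: write a placeholder file, build its checksum line, verify it.
with open("demo.bin", "wb") as f:
    f.write(b"neam release artifact")
line = f"{sha256_of('demo.bin')}  demo.bin"
print(verify(line, "demo.bin"))  # True
```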

Optional GPG signature verification:

bash
./scripts/neam-install.sh --version v0.6.5 \
    --gpg-keyring /path/to/neam-maintainers.gpg \
    --execute

Key installer flags:

| Flag | Description |
| --- | --- |
| --version | Release tag (e.g., v0.6.5) |
| --execute | Perform the installation; omit for dry-run mode |
| --install-dir | Custom installation prefix (default: /usr/local/neam) |
| --os / --arch | Override platform detection for cross-installs |
| --gpg-keyring | Path to a GPG keyring for signature verification |
| --artifact-prefix | Tarball prefix (default: neam) |

Directory layout after installation:

text
/usr/local/neam/
  v0.6.5/
    bin/
      neam
      neamc
    share/neam/
      construct-templates/
      policy-profiles/
      schemas/
      skills-index/
  receipts/
    install-<timestamp>.json

Add the installed binaries to your PATH:

bash
# Add to ~/.bashrc, ~/.zshrc, or equivalent
export PATH="/usr/local/neam/v0.6.5/bin:$PATH"

# Or if using a custom install directory:
export PATH="$HOME/.local/neam/v0.6.5/bin:$PATH"

2.10.2 Manual Download (Alternative) #

If you prefer not to use the installer script, pre-built binaries are also available for manual download from the GitHub Releases page:

text
https://github.com/neam-lang/Neam/releases

Each release includes archives for the supported platforms and architectures (for example, neam-v0.6.5-macos-arm64.tar.gz for Apple Silicon Macs).

Download the archive for your platform, extract it, and add the extracted directory to your PATH:

bash
# macOS / Linux example
tar xzf neam-v0.6.5-macos-arm64.tar.gz
cd neam-v0.6.5-macos-arm64
export PATH="$(pwd):$PATH"

# Verify
neamc --version
neam --version

Pre-built binaries include all nine executables and the shared library. They are built with the default configuration (HNSW enabled, cloud backends disabled).


2.11 Troubleshooting Common Build Issues #

CMake version too old #

Symptom: CMake Error: CMake 3.20 or higher is required.

Fix: Upgrade CMake. On macOS: brew upgrade cmake. On Linux:

bash
pip install cmake --upgrade
# or download from https://cmake.org/download/

CURL not found #

Symptom: Could NOT find CURL (missing: CURL_LIBRARY CURL_INCLUDE_DIR)

Fix: Install libcurl development headers.

OpenSSL not found #

Symptom: Could NOT find OpenSSL

Fix: Install OpenSSL development headers.

C++20 not supported #

Symptom: error: 'optional' is not a member of 'std' or similar C++20 errors.

Fix: Upgrade your compiler.

uSearch compilation failure on MinGW #

Symptom: Template errors in usearch/index.hpp when cross-compiling for Windows.

Fix: This is expected. The build system automatically disables HNSW when cross-compiling for Windows with MinGW. If you see this error in a non-cross-compilation scenario, add -DNEAM_USE_HNSW=OFF to your CMake command.

Download failures during cmake #

Symptom: FetchContent fails to download dependencies.

Fix: Check your internet connection and proxy settings. If you are behind a corporate firewall, configure CMake's proxy:

bash
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080

Alternatively, you can manually download the dependency tarballs and place them in the build/_deps/ directory before running cmake.

Permission denied when running executables #

Symptom: Permission denied when running ./neamc or ./neam on macOS or Linux.

Fix:

bash
chmod +x neamc neam neam-cli neam-api neam-pkg neam-lsp neam-dap neam-forge neam-gym

macOS Gatekeeper blocks execution #

Symptom: "neamc" can't be opened because Apple cannot check it for malicious software.

Fix: Right-click the executable, select "Open", and confirm. Or remove the quarantine attribute:

bash
xattr -d com.apple.quarantine neamc neam neam-cli neam-api neam-pkg neam-lsp neam-dap neam-forge neam-gym

2.12 Project Directory Structure #

When working on Neam projects, the recommended directory structure is:

text
my-agent-project/
  neam.toml              # Project manifest (created by neam-pkg init)
  src/
    main.neam            # Entry point
    agents/
      triage.neam        # Agent definitions
      specialists.neam
    tools/
      search.neam        # Tool definitions
    skills/
      summarize.neam     # Skill definitions
    guards/
      content_policy.neam  # Guard/policy definitions
    knowledge/
      docs.neam          # Knowledge base definitions
  data/
    docs/                # Documents for RAG ingestion
  tests/
    test_triage.neam     # Test files
  .neam/
    traces/              # Auto-generated trace logs
    memory/              # SQLite memory files
    bundles/             # Compiled bundles (.neamb archives)

Initialize a new project:

bash
neam-pkg init my-agent-project
cd my-agent-project

This creates the neam.toml manifest and a basic src/main.neam file.


2.13 Your First Real Program #

Let us end this chapter by building something that actually talks to an LLM. This requires at least one provider to be configured (see Section 2.9).

Create src/main.neam:

neam
agent Greeter {
  provider: "ollama"
  model: "llama3"
  system: "You are a friendly assistant. Keep your responses to one sentence."
}

{
  emit "--- Neam Agent Test ---";
  let response = Greeter.ask("What is the most interesting thing about programming languages?");
  emit "Agent says: " + response;
  emit "--- Done ---";
}
💡 Tip

If you do not have Ollama installed, change the provider to "openai" and the model to "gpt-4o-mini" (requires OPENAI_API_KEY).

Compile and run:

bash
neamc src/main.neam -o main.neamb
neam main.neamb

You should see output similar to:

text
--- Neam Agent Test ---
Agent says: The most interesting thing about programming languages is how they
shape the way we think about problems and solutions.
--- Done ---

Congratulations. You have just built and run your first AI agent in Neam.


2.14 Understanding the Cost Advantage #

Now that you have a working Neam installation, it is worth understanding the economic implications of the toolchain you just built. The following total cost of ownership (TCO) model, based on a production B5-class agent system (6 agents, 5 handoffs, RAG, voice, MCP integration, 10,000 daily interactions), quantifies the difference:

| Cost Component | Python + LangChain | Neam | Savings |
| --- | --- | --- | --- |
| Development (LOC × $/LOC) | 520 LOC × $50 = $26,000 | 85 LOC × $50 = $4,250 | 84% |
| Infrastructure (monthly) | $450 (2 GB instance) | $50 (256 MB instance) | 89% |
| Cold start penalty (serverless, monthly) | $120 | $8 | 93% |
| Debugging (errors/month × $/error) | $2,000 | $200 | 90% |
| Total monthly (after initial development) | $2,570 | $258 | 90% |

The infrastructure savings come directly from Neam's memory efficiency: a Neam agent system runs comfortably in a 256 MB container, while the equivalent Python system requires a 2 GB instance. On cloud infrastructure priced per GB-hour, this is a 9x reduction in compute cost.
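The monthly totals in the table are simple sums. This sketch (using only the monthly figures quoted above) reproduces the $2,570 and $258 totals and the 90% savings figure:

```python
# Monthly cost components from the TCO table above (USD).
python_monthly = {"infra": 450, "cold_start": 120, "debugging": 2000}
neam_monthly   = {"infra": 50,  "cold_start": 8,   "debugging": 200}

def monthly_total(costs: dict) -> int:
    """Sum the monthly cost components."""
    return sum(costs.values())

savings = 1 - monthly_total(neam_monthly) / monthly_total(python_monthly)
print(monthly_total(python_monthly))  # 2570
print(monthly_total(neam_monthly))    # 258
print(f"{savings:.0%}")               # 90%
```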

The debugging savings reflect the compile-time error detection discussed in Section 2.7. With Python, misconfigured handoffs, missing tool schemas, and invalid provider references surface as runtime errors -- often in production, often at 3 AM. With Neam, the compiler catches these before deployment.

📝 Note

These figures are drawn from the Neam architecture paper (Govindaraj et al., 2026). Your actual costs will vary based on cloud provider, traffic patterns, and system complexity. The relative ratios, however, are consistent across measured deployments.


2.15 What Comes Next #

The next chapter explores the Neam language in depth: variables, types, expressions, control flow, functions, and all the foundational syntax you need before building more complex agent systems.

At this point you have:

  1. Installed the platform prerequisites (compiler, CMake, Git, libcurl, OpenSSL)
  2. Built the Neam toolchain from source (or installed pre-built binaries)
  3. Verified the build with a smoke test and the test suite
  4. Configured your editor with neam-lsp and neam-dap
  5. Set environment variables for at least one LLM provider
  6. Compiled and run your first agent program

You are ready to write Neam.


Exercises #

Exercise 2.1: Build from Source #

Clone the Neam repository, build it from source, and verify that all nine executables are produced. Record:

  1. Your operating system and version
  2. Your compiler version
  3. Your CMake version
  4. The total build time (use time cmake --build . --parallel)
  5. The sizes of the neamc and neam executables

Exercise 2.2: Provider Configuration #

Set up at least two LLM providers (choose from OpenAI, Anthropic, Gemini, Ollama). Write a Neam program that queries both providers and prints their responses:

neam
agent ProviderA {
  provider: "openai"
  model: "gpt-4o-mini"
  system: "Identify yourself as Provider A."
}

agent ProviderB {
  provider: "ollama"
  model: "llama3"
  system: "Identify yourself as Provider B."
}

{
  emit "Provider A: " + ProviderA.ask("Who are you?");
  emit "Provider B: " + ProviderB.ask("Who are you?");
}

Exercise 2.3: Watch Mode #

Use neam-cli --watch to run a program in watch mode. While it is running, modify the agent's system prompt and save the file. Observe the automatic recompilation and re-execution. Answer:

  1. How quickly does the watch mode detect changes?
  2. What happens if you introduce a syntax error?
  3. What happens if you fix the syntax error?

Exercise 2.4: Build Flags #

Rebuild Neam with NEAM_USE_HNSW=OFF and observe the difference:

bash
cmake .. -DCMAKE_BUILD_TYPE=Release -DNEAM_USE_HNSW=OFF
cmake --build . --parallel
  1. How does the build time change?
  2. Do all tests still pass?
  3. What impact does this have on RAG functionality?

Exercise 2.5: Toolchain Exploration #

Run each of the following commands and describe what they do:

bash
./neam-api --help
./neam-pkg --help
./neam-gym --help
./neam-cli --help

For neam-api, start the server on port 8080 and use curl to query the health endpoint:

bash
./neam-api --port 8080 &
curl http://localhost:8080/api/v1/health

Document the response format.

Exercise 2.6: Bytecode Inspection #

Compile the hello world program and examine the .neamb file:

bash
./neamc hello.neam -o hello.neamb
xxd hello.neamb | head -20
  1. Can you identify the NEAM magic header in the hex dump?
  2. Can you spot the string "Hello, Neam!" in the constant pool section?
  3. How large is the .neamb file compared to the .neam source?