Programming Neam

Chapter 12: Tools and Function Calling #

"An agent without tools is just a chatbot."

Why This Matters #

Think of an agent without tools like giving a new employee a desk, a computer, and a job title -- but no software, no login credentials, no phone, and no access to any internal systems. They might be brilliant at reasoning and writing, but they cannot actually do anything. Tools and skills are the equipment you hand to your agents. A weather bot without an HTTP tool is like a meteorologist without a weather station. A file assistant without filesystem access is like a librarian locked out of the library. The quality of your agent is bounded by the quality of the tools you give it.


In the previous two chapters, you learned to declare agents and connect them to various LLM providers. Those agents can answer questions, generate text, and analyze images. But they cannot act. They cannot search the web, read files, query databases, call APIs, or perform calculations with guaranteed accuracy. For that, agents need tools.

Tools are the bridge between an agent's language understanding and the real world. In this chapter, you will learn how to define tools, connect them to agents, control their execution through budgets and guards, and integrate with the Model Context Protocol (MCP) for interoperability with external tool servers.


What Are Tools? #

A tool is a capability that an agent can invoke. When an agent has access to tools, the flow changes from a simple prompt-response cycle to a more powerful loop:

  1. The agent receives a user prompt.
  2. The agent decides whether it needs to use a tool to answer the question.
  3. If yes, the agent specifies which tool to call and with what parameters.
  4. The Neam VM executes the tool and returns the result to the agent.
  5. The agent incorporates the tool result into its response.

This is called function calling in the OpenAI ecosystem and tool use in the Anthropic ecosystem. In Neam, the concept is unified under the tool keyword.

(Sequence diagram: the user prompts the agent, the agent forwards the prompt to the LLM, the LLM requests a tool call, the VM executes the tool, and the result flows back through the LLM and agent to the user.)

Tool Definition Syntax #

📝 Note

The preferred keyword for defining agent capabilities is skill (instead of tool). Both keywords work identically -- skill is simply the modern convention. All examples in this chapter using tool remain valid. New examples use skill.

In Neam, tools are declared at the top level using the tool keyword:

neam
tool WebSearch {
  description: "Search the web for information"
  params: [
    { name: "query", schema: { "type": "string" } }
  ]
  impl(query) {
    let result = http_get("https://api.search.com/?q=" + query);
    return result;
  }
}

Let us break this down:

  - description -- a natural-language summary the LLM reads to decide when the tool is relevant.
  - params -- the parameter list; each entry pairs a name with a JSON Schema describing its type.
  - impl -- the Neam code the VM executes when the tool is invoked; its return value is sent back to the LLM.

Parameter Types #

The schema field uses JSON Schema types:

| Type | Description | Example |
|---|---|---|
| "string" | Text value | "hello" |
| "number" | Numeric value | 42, 3.14 |
| "integer" | Integer value | 42 |
| "boolean" | True or false | true |
| "object" | Nested structure | {"key": "value"} |
| "array" | List of values | [1, 2, 3] |

Multi-Parameter Tools #

Tools can accept multiple parameters:

neam
tool Calculator {
  description: "Perform a mathematical calculation"
  params: [
    { name: "operation", schema: { "type": "string" } },
    { name: "a", schema: { "type": "number" } },
    { name: "b", schema: { "type": "number" } }
  ]
  impl(operation, a, b) {
    if (operation == "add") { return a + b; }
    if (operation == "subtract") { return a - b; }
    if (operation == "multiply") { return a * b; }
    if (operation == "divide") {
      if (b == 0) { return "Error: division by zero"; }
      return a / b;
    }
    return "Unknown operation: " + operation;
  }
}

Parameter Enums #

You can restrict parameter values to a predefined set using the enum field in the JSON Schema. This helps the LLM choose valid values:

neam
tool UnitConverter {
  description: "Convert between units of measurement"
  params: [
    { name: "value", schema: { "type": "number" } },
    { name: "from_unit", schema: {
      "type": "string",
      "enum": ["meters", "feet", "kilometers", "miles"]
    }},
    { name: "to_unit", schema: {
      "type": "string",
      "enum": ["meters", "feet", "kilometers", "miles"]
    }}
  ]
  impl(value, from_unit, to_unit) {
    // Convert to meters first, then to target unit
    let in_meters = value;
    if (from_unit == "feet") { in_meters = value * 0.3048; }
    if (from_unit == "kilometers") { in_meters = value * 1000; }
    if (from_unit == "miles") { in_meters = value * 1609.34; }

    let result = in_meters;
    if (to_unit == "feet") { result = in_meters / 0.3048; }
    if (to_unit == "kilometers") { result = in_meters / 1000; }
    if (to_unit == "miles") { result = in_meters / 1609.34; }

    return str(value) + " " + from_unit + " = " + str(result) + " " + to_unit;
  }
}

When the LLM sees the enum constraint, it knows exactly which values are valid, reducing errors and improving tool invocation accuracy.


Connecting Tools to Agents #

Tools are connected to agents through the skills field:

neam
tool Calculator {
  description: "Perform mathematical calculations. Supports add, subtract, multiply, divide."
  params: [
    { name: "operation", schema: { "type": "string" } },
    { name: "a", schema: { "type": "number" } },
    { name: "b", schema: { "type": "number" } }
  ]
  impl(operation, a, b) {
    if (operation == "add") { return a + b; }
    if (operation == "subtract") { return a - b; }
    if (operation == "multiply") { return a * b; }
    if (operation == "divide") {
      if (b == 0) { return "Error: division by zero"; }
      return a / b;
    }
    return "Unknown operation";
  }
}

agent MathBot {
  provider: "openai"
  model: "gpt-4o-mini"
  system: "You are a math assistant. Use the Calculator tool for all arithmetic.
           Never attempt mental math -- always use the tool."
  skills: [Calculator]
}

{
  let response = MathBot.ask("What is 1847 * 293?");
  emit response;
}

When the agent receives the question "What is 1847 * 293?", the following happens:

  1. The VM sends the prompt to the LLM along with the tool definition (name, description, parameters).
  2. The LLM decides to call the Calculator tool with operation: "multiply", a: 1847, b: 293.
  3. The VM executes the impl block, computing 1847 * 293 = 541,171.
  4. The VM sends the result back to the LLM.
  5. The LLM formats a natural language response incorporating the result.

💡 Tip

The tool's description is critical. If the LLM does not understand when to use a tool, it will either ignore it or misuse it. Write descriptions as if explaining the tool to a colleague.

Common Mistake: Overly Vague Tool Descriptions

One of the most frequent mistakes is writing a tool description like "Does stuff" or "General helper". The LLM has only the description text to decide whether to call your tool. A vague description means the LLM either never calls the tool (because it cannot tell when it is relevant) or calls it at the wrong time (because it is guessing).

Bad: description: "Utility function"

Good: description: "Convert a temperature value from Celsius to Fahrenheit. Accepts a numeric temperature in Celsius and returns the equivalent in Fahrenheit."

The description should answer three questions: (1) What does it do? (2) What input does it expect? (3) What output does it return? If your description does not answer all three, revise it.
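Putting those three questions into practice, a complete version of the temperature converter might look like the following sketch (the CelsiusToFahrenheit name is illustrative, not from the chapter):

```neam
tool CelsiusToFahrenheit {
  // The description answers: what it does, what input it expects, what it returns
  description: "Convert a temperature value from Celsius to Fahrenheit.
                Accepts a numeric temperature in Celsius and returns the
                equivalent in Fahrenheit."
  params: [
    { name: "celsius", schema: { "type": "number" } }
  ]
  impl(celsius) {
    return celsius * 9 / 5 + 32;
  }
}
```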


Practical Tool Examples #

File Reader Tool #

neam
tool FileReader {
  description: "Read the contents of a file from the local filesystem"
  params: [
    { name: "path", schema: { "type": "string" } }
  ]
  impl(path) {
    try {
      let content = file_read_string(path);
      return content;
    } catch (err) {
      return "Error reading file: " + err;
    }
  }
}

agent FileAssistant {
  provider: "openai"
  model: "gpt-4o"
  system: "You help users understand the contents of their files.
           Use the FileReader tool to read files when asked."
  skills: [FileReader]
}

{
  let response = FileAssistant.ask("What is in the file ./README.md?");
  emit response;
}

HTTP API Tool #

neam
tool WeatherAPI {
  description: "Get the current weather for a city"
  params: [
    { name: "city", schema: { "type": "string" } }
  ]
  impl(city) {
    let url = "https://wttr.in/" + city + "?format=%C+%t";
    try {
      let result = http_get(url);
      return result;
    } catch (err) {
      return "Weather service unavailable: " + err;
    }
  }
}

agent WeatherBot {
  provider: "openai"
  model: "gpt-4o-mini"
  system: "You are a weather assistant. Use the WeatherAPI tool to get
           current weather data. Report temperatures and conditions clearly."
  skills: [WeatherAPI]
}

{
  let response = WeatherBot.ask("What is the weather in Tokyo?");
  emit response;
}

JSON Processing Tool #

neam
tool JSONProcessor {
  description: "Parse and query JSON data. Extract specific fields from JSON strings."
  params: [
    { name: "json_string", schema: { "type": "string" } },
    { name: "field", schema: { "type": "string" } }
  ]
  impl(json_string, field) {
    try {
      let data = json_parse(json_string);
      return str(data[field]);
    } catch (err) {
      return "Error processing JSON: " + err;
    }
  }
}
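Wired to an agent, the processor can answer questions about raw JSON payloads. The agent name and prompt below are illustrative:

```neam
agent DataAssistant {
  provider: "openai"
  model: "gpt-4o-mini"
  system: "You help users extract fields from JSON data.
           Use the JSONProcessor tool instead of parsing JSON yourself."
  skills: [JSONProcessor]
}

{
  let response = DataAssistant.ask(
    "From this JSON, what is the 'status' field? {\"status\": \"active\", \"count\": 3}");
  emit response;
}
```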

🎯 Try It Yourself #1

Modify the WeatherAPI tool above to accept a second parameter units with an enum of ["celsius", "fahrenheit"]. Update the URL to include the units preference, and connect the modified tool to an agent. Ask the agent: "What is the weather in Berlin in Fahrenheit?" and verify that the units parameter is passed correctly.


The skill Keyword #

The skill keyword is the preferred way to define agent capabilities. The skill and tool keywords are fully interchangeable -- any program using tool will continue to work. The name skill reflects current terminology across the AI community: what was once called a "tool" is now more commonly called a "skill," emphasizing that these are reusable capabilities an agent learns to use, not just functions it calls.

Basic skill Syntax #

The skill keyword supports a simplified parameter syntax using { name: type } pairs instead of the verbose JSON Schema array:

neam
skill get_weather {
  description: "Get current weather for a city"
  params: { city: string }
  impl(city) {
    let url = f"https://wttr.in/{city}?format=j1";
    try {
      let result = http_get(url);
      return result;
    } catch (err) {
      return f"Weather service unavailable: {err}";
    }
  }
}

Notice the differences from the tool syntax:

| Feature | tool (classic) | skill (preferred) |
|---|---|---|
| Keyword | tool | skill |
| Naming convention | PascalCase (WebSearch) | snake_case (web_search) |
| Parameter syntax | params: [ { name: "x", schema: {...} } ] | params: { x: string } |
| String interpolation | Concatenation ("hello " + name) | f-strings (f"hello {name}") |
| Behavior | Identical | Identical |

The simplified params syntax maps types as follows:

| Simplified type | Equivalent JSON Schema |
|---|---|
| string | { "type": "string" } |
| number | { "type": "number" } |
| integer | { "type": "integer" } |
| boolean | { "type": "boolean" } |
| string[] | { "type": "array", "items": { "type": "string" } } |
| number[] | { "type": "array", "items": { "type": "number" } } |
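As a quick illustration of the array shorthand, here is a minimal sketch of a skill that accepts a list of strings (the count_tags name is illustrative; len on a list is documented later in this chapter):

```neam
skill count_tags {
  description: "Count how many tags are in a list of tag names"
  params: { tags: string[] }
  impl(tags) {
    // len works on strings, lists, and maps
    return len(tags);
  }
}
```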

Multi-Parameter skill Example #

neam
skill calculate {
  description: "Perform a mathematical operation on two numbers"
  params: { operation: string, a: number, b: number }
  impl(operation, a, b) {
    if (operation == "add") { return a + b; }
    if (operation == "subtract") { return a - b; }
    if (operation == "multiply") { return a * b; }
    if (operation == "divide") {
      if (b == 0) { return f"Error: division by zero"; }
      return a / b;
    }
    return f"Unknown operation: {operation}";
  }
}

agent math_bot {
  provider: "openai"
  model: "gpt-4o-mini"
  system: "You are a math assistant. Always use the calculate skill for arithmetic."
  skills: [calculate]
}

{
  let answer = math_bot.ask("What is 256 * 789?");
  emit answer;
}

🎯 Try It Yourself #2

Rewrite the FileReader tool from the "Practical Tool Examples" section using the skill keyword and simplified params syntax. Rename it to read_file following the snake_case convention. Connect it to an agent and verify it works identically to the original.


External Skills (extern skill) #

So far, every tool and skill has included an impl block -- Neam code that runs when the skill is invoked. But many real-world tools do not need custom implementation logic. They call an HTTP endpoint, delegate to an MCP server, or invoke a built-in capability of the underlying LLM provider. Writing boilerplate impl blocks for these cases is tedious and error-prone.

The extern skill declaration solves this by letting you bind a skill directly to an external system. Instead of an impl block, you provide a binding block that tells the Neam VM how to execute the skill.

Neam supports three binding types: http, mcp, and claude_builtin.

HTTP Binding #

The http binding maps a skill directly to an HTTP API call. The Neam VM handles request construction, header injection, response parsing, and timeouts:

neam
extern skill get_weather {
  description: "Get current weather for a city"
  params: { city: string }
  binding: http {
    method: "GET"
    url: "https://wttr.in/{city}?format=j1"
    headers: ["Accept: application/json"]
    response_path: "/current_condition/0/weatherDesc/0/value"
    timeout: 5000
  }
}

The binding: http block supports these fields:

| Field | Type | Description |
|---|---|---|
| method | string | HTTP method: "GET", "POST", "PUT", "DELETE" |
| url | string | URL template. Parameter names in {braces} are substituted. |
| headers | string[] | Request headers in "Key: Value" format. |
| body | string | Request body template (for POST/PUT). Supports {param} substitution. |
| response_path | string | JSON Pointer path to extract from the response. If omitted, the full response body is returned. |
| timeout | integer | Request timeout in milliseconds. Default: 10000. |

No impl block is needed -- the VM generates the implementation from the binding declaration.
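The body and headers fields compose the same way as url. A hypothetical POST binding might look like this (the api.notes.example.com endpoint, NOTES_TOKEN variable, and /id response path are illustrative assumptions, not a real service):

```neam
extern skill create_note {
  description: "Create a note with a title and body text"
  params: { title: string, text: string }
  binding: http {
    method: "POST"
    url: "https://api.notes.example.com/notes"
    headers: ["Content-Type: application/json",
              "Authorization: Bearer {env.NOTES_TOKEN}"]
    // Parameters substitute into the body template just like the URL
    body: "{\"title\": \"{title}\", \"text\": \"{text}\"}"
    response_path: "/id"
    timeout: 5000
  }
}
```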

MCP Binding #

The mcp binding delegates skill execution to a declared MCP server. This is useful when you want to give individual skills descriptive names or custom descriptions while still routing execution through MCP:

neam
mcp_server filesystem {
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
}

extern skill read_file {
  description: "Read a file from the filesystem"
  params: { path: string }
  binding: mcp {
    server: "filesystem"
    tool: "read_file"
  }
}

The binding: mcp block supports these fields:

| Field | Type | Description |
|---|---|---|
| server | string | Name of a declared mcp_server. |
| tool | string | The tool name as exposed by the MCP server. |

This gives you fine-grained control over which MCP tools are exposed to which agents, rather than exposing every tool from every MCP server.

Claude Built-in Binding #

The claude_builtin binding maps a skill to one of Anthropic's built-in tool types (such as the bash tool, text editor, or computer use). This is only available when using the anthropic provider:

neam
extern skill bash_tool {
  description: "Execute bash commands"
  params: { command: string }
  binding: claude_builtin {
    type: "bash_20241022"
  }
}

The binding: claude_builtin block has one field:

| Field | Type | Description |
|---|---|---|
| type | string | The Anthropic tool type identifier (e.g., "bash_20241022", "text_editor_20241022", "computer_20241022"). |

Connecting External Skills to Agents #

External skills are connected to agents exactly like regular skills -- through the skills field:

neam
agent assistant {
  provider: "anthropic"
  model: "claude-sonnet-4-20250514"
  system: "You are a helpful assistant with access to weather data,
           files, and bash commands."
  skills: [get_weather, read_file, bash_tool]
}

The agent does not know (or care) whether a skill is implemented with an impl block or an extern binding. From the LLM's perspective, all skills look the same.


The sensitive Flag #

Some skills perform destructive or irreversible operations -- deleting records, sending emails, transferring funds. You want these operations to require explicit approval before execution, rather than being invoked automatically by the LLM.

The sensitive flag marks a skill as requiring confirmation:

neam
skill delete_record {
  description: "Delete a database record"
  params: { id: string }
  sensitive: true
  impl(id) {
    // Requires explicit approval before execution
    return db_delete(id);
  }
}

When a skill is marked sensitive: true, the Neam VM will:

  1. Pause execution when the LLM requests this skill.
  2. Present the skill call (name and parameters) to the user or calling system for approval.
  3. Only execute the skill if approval is granted.
  4. Return a "rejected" result to the LLM if approval is denied.

This works with both skill and extern skill declarations:

neam
extern skill send_email {
  description: "Send an email to a recipient"
  params: { to: string, subject: string, body: string }
  sensitive: true
  binding: http {
    method: "POST"
    url: "https://api.mail.example.com/send"
    headers: ["Authorization: Bearer {env.MAIL_TOKEN}"]
    body: "{\"to\": \"{to}\", \"subject\": \"{subject}\", \"body\": \"{body}\"}"
  }
}

💡 Tip

When in doubt, mark a skill as sensitive. It is much better to ask for confirmation on a harmless operation than to silently execute a destructive one.


Guard and Budget Declarations for Skills #

In production systems, you need more than just the sensitive flag. You need systematic controls over what skills can do, what data they can see, and how many resources they can consume. Neam provides first-class guard, guardchain, and budget declarations that compose cleanly with skills and agents.

Defining a Guard #

A guard is a named block that intercepts skill inputs, skill outputs, or both:

neam
guard ToolGuard {
  description: "Safety guard for tool execution"
  on_tool_input(input) { return input; }
  on_tool_output(output) { return output; }
}

Guards can modify data (e.g., redact sensitive fields), pass it through unchanged, or block it entirely by returning "block".

Guard Chains #

Multiple guards can be composed into a guardchain. Guards in the chain execute in order -- each guard receives the output of the previous one:

neam
guardchain ToolChain = [ToolGuard];

A more realistic chain might include several guards:

neam
guard InputSanitizer {
  description: "Sanitize inputs before tool execution"
  on_tool_input(input) {
    // Strip potentially dangerous characters
    return input;
  }
}

guard OutputRedactor {
  description: "Redact sensitive data from tool output"
  on_tool_output(output) {
    if (output.contains("SSN")) {
      return "[REDACTED]";
    }
    return output;
  }
}

guardchain SafetyChain = [InputSanitizer, OutputRedactor];

Budget Declarations #

A budget declaration defines resource limits as a named, reusable block:

neam
budget AgentBudget {
  api_calls: 30
  tokens: 300000
}

Budget fields:

| Field | Type | Description |
|---|---|---|
| api_calls | integer | Maximum number of LLM API calls. |
| tokens | integer | Maximum total tokens (input + output) consumed. |

Composing Guards and Budgets with Agents #

Guards and budgets are attached to agents alongside skills:

neam
guard ToolGuard {
  description: "Safety guard for tool execution"
  on_tool_input(input) { return input; }
  on_tool_output(output) { return output; }
}

guardchain ToolChain = [ToolGuard];

budget AgentBudget {
  api_calls: 30
  tokens: 300000
}

agent MyAgent {
  provider: "openai"
  model: "gpt-4o"
  skills: [get_weather, read_file]
  guards: [ToolChain]
  budget: AgentBudget
}

This gives you a clean, declarative way to enforce safety and resource policies across your entire agent system. Guards and budgets are covered in much greater detail in Chapter 14 (Guards and Safety) and Chapter 17 (Autonomous Agents).

🎯 Try It Yourself #3

Write a guard called DeleteBlocker that blocks any skill input containing the word "delete" (case-insensitive). Create a guardchain with this guard, attach it to an agent that has a delete_record skill, and verify that the agent cannot execute deletions.


Capability Declarations #

Agents can declare their capabilities explicitly, which is useful for documentation, discovery, and the Agent-to-Agent protocol (Chapter 18):

neam
agent ResearchBot {
  provider: "openai"
  model: "gpt-4o"
  system: "You are a research assistant. Use tools to find information."
  skills: [WebSearch, FileReader, Calculator]
}

The skills list serves two purposes:

  1. Tool registration: The VM makes these tools available to the LLM during inference.
  2. Capability advertisement: When the agent is exposed via the A2A protocol, clients can discover what the agent can do.

Budget Costs Per Tool #

In production systems, you may want to limit how much agents spend on tool execution. Different tools have different cost profiles -- a web search API call might cost money, while a local file read is free.

Neam supports budget constraints at the agent level:

neam
agent BudgetedAgent {
  provider: "openai"
  model: "gpt-4o-mini"
  system: "You are a research assistant."
  skills: [WebSearch, Calculator]

  budget: {
    max_daily_calls: 100
    max_daily_cost: 5.0
    max_daily_tokens: 50000
  }
}

The inline budget fields control daily limits for that agent:

| Field | Type | Description |
|---|---|---|
| max_daily_calls | int | Maximum total LLM calls per day |
| max_daily_cost | float | Maximum cost in USD per day |
| max_daily_tokens | int | Maximum tokens consumed per day |

📝 Note

Inline budget: { ... } inside an agent declaration uses max_daily_* field names. Standalone budget blocks (see Chapter 14) use a different convention: api_calls, tokens, and cost_usd. The standalone form defines reusable, named budget resources that can be shared across agents.

When a limit is reached, subsequent agent calls will fail with a budget exceeded error. This prevents runaway costs in autonomous agents (covered in Chapter 17).
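Assuming budget errors surface as ordinary catchable errors (the exact error text is not specified in this chapter), a caller can guard against exhaustion with the try/catch construct used throughout the tool examples:

```neam
{
  try {
    let answer = BudgetedAgent.ask("Summarize today's top research news.");
    emit answer;
  } catch (err) {
    // A budget-exceeded failure (or any other error) lands here
    emit "Agent call failed: " + str(err);
  }
}
```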


Tool Guards: Input and Output Validation #

Tools can be protected by guards that validate inputs before execution and outputs before returning results. This is essential for security:

neam
guard PathValidator {
  description: "Ensures file paths are within allowed directories"

  on_tool_input(input) {
    if (input.contains("..")) {
      emit "[Guard] Blocked path traversal attempt";
      return "block";
    }
    if (input.contains("/etc/")) {
      emit "[Guard] Blocked access to system directory";
      return "block";
    }
    return input;
  }
}

guard SensitiveDataFilter {
  description: "Redacts sensitive patterns from tool output"

  on_tool_output(output) {
    if (output.contains("password")) {
      return "[REDACTED: sensitive data]";
    }
    if (output.contains("api_key")) {
      return "[REDACTED: sensitive data]";
    }
    return output;
  }
}

Guards are covered in full detail in Chapter 14. The key point here is that tool guards sit between the agent and the tool, inspecting and potentially modifying or blocking data flow in both directions.
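Using the guardchain syntax from the previous section, these two guards could be composed and attached to a file-reading agent like so (the SecureFileBot name is illustrative; FileReader is the tool defined earlier in this chapter):

```neam
guardchain FileSafetyChain = [PathValidator, SensitiveDataFilter];

agent SecureFileBot {
  provider: "openai"
  model: "gpt-4o"
  system: "You help users read and summarize their files."
  skills: [FileReader]
  guards: [FileSafetyChain]
}
```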


Skills: Agent Capabilities #

The skills field connects tools to agents, making them available during inference:

neam
tool SearchTool {
  description: "Search for information"
  params: [ { name: "query", schema: { "type": "string" } } ]
  impl(query) {
    return http_get("https://api.search.com?q=" + query);
  }
}

tool CalcTool {
  description: "Perform arithmetic"
  params: [
    { name: "expression", schema: { "type": "string" } }
  ]
  impl(expression) {
    // Simplified: in practice, parse the expression
    return "Result of " + expression;
  }
}

tool FileTool {
  description: "Read a local file"
  params: [ { name: "path", schema: { "type": "string" } } ]
  impl(path) {
    return file_read_string(path);
  }
}

// Agent with multiple skills
agent ResearchAssistant {
  provider: "openai"
  model: "gpt-4o"
  temperature: 0.3
  system: "You are a research assistant with access to search, calculator,
           and file reading tools. Use the appropriate tool for each task.
           Prefer using tools over guessing."
  skills: [SearchTool, CalcTool, FileTool]
}

{
  // The agent decides which tool to use based on the question
  let r1 = ResearchAssistant.ask("What is 2^32?");
  emit "Math: " + r1;
  emit "";

  let r2 = ResearchAssistant.ask("Read the file ./config.json and summarize it.");
  emit "File: " + r2;
}

MCP Integration #

The Model Context Protocol (MCP) is an open standard for connecting LLMs to external tools and data sources. Instead of defining tools directly in your Neam program, you can connect to an MCP server that provides tools dynamically.

┌───────────────────────────────────────────────────────────────────┐
│                                                                   │
│  ┌────────────────┐     ┌───────────┐     ┌──────────────────┐   │
│  │  Neam Agent     │     │  Neam VM  │     │  MCP Server      │   │
│  │                 │     │           │     │                  │   │
│  │ mcp_servers:    │     │ Discovers │     │ Exposes tools:   │   │
│  │ [MyMCPServer]   │────>│ tools via │────>│ - search         │   │
│  │                 │     │ MCP proto │     │ - database       │   │
│  │                 │     │           │     │ - file_system    │   │
│  │                 │<────│ Executes  │<────│ - api_call       │   │
│  │                 │     │ tools     │     │                  │   │
│  └────────────────┘     └───────────┘     └──────────────────┘   │
│                                                                   │
│  Benefits:                                                        │
│  - Tools defined once, shared across agents                      │
│  - External tool servers (any language)                          │
│  - Dynamic tool discovery                                        │
│  - Standard protocol for interoperability                        │
│                                                                   │
└───────────────────────────────────────────────────────────────────┘

Declaring MCP Servers #

MCP servers are declared at the top level using the mcp_server keyword. Neam supports two transport types: stdio (local process) and sse (HTTP Server-Sent Events).

stdio transport -- The MCP server runs as a local process:

neam
mcp_server GitHub {
  transport: "stdio"
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-github"]
  env: { GITHUB_TOKEN: env("GITHUB_TOKEN") }
}

sse transport -- The MCP server runs as a remote HTTP service:

neam
mcp_server Postgres {
  transport: "sse"
  url: "http://localhost:3001/sse"
}

Connecting Agents to MCP Servers #

Once declared, MCP servers are connected to agents through the mcp_servers field:

neam
agent DevAssistant {
  provider: "openai"
  model: "gpt-4o"
  system: "You help with development tasks. Use available tools
           for GitHub operations and database queries."
  mcp_servers: [GitHub, Postgres]
}

{
  let response = DevAssistant.ask("List my open pull requests on the neam-lang repo.");
  emit response;
}

When the agent starts, the VM:

  1. Connects to each MCP server.
  2. Calls the tools/list method to discover available tools.
  3. Registers discovered tools as if they were declared locally.
  4. Routes LLM tool calls to the appropriate MCP server.

This means the agent can use MCP-provided tools exactly like native Neam tools -- the LLM sees the same tool descriptions and parameters regardless of where the tool runs.

Why MCP Matters #

📝 Note

MCP server configuration can also be managed through neam.toml. See Appendix B for the full configuration reference.

Bulk Tool Import with adopt #

When an MCP server exposes many tools and you want to make all (or a filtered subset) available to an agent without listing each one individually, use the adopt syntax:

neam
mcp_server filesystem {
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
}

// Adopt ALL tools from the filesystem MCP server
adopt filesystem.*;

// Or adopt specific tools by name
adopt filesystem.{read_file, write_file, list_directory};

The adopt keyword imports MCP tools as if they were locally declared skills. Once adopted, they appear in the agent's tool list just like any other skill:

neam
agent FileBot {
  provider: "openai"
  model: "gpt-4o"
  system: "You help with file operations."
  skills: [read_file, write_file, list_directory]
}

The adopt syntax supports three forms:

| Syntax | Effect |
|---|---|
| adopt server_name.*; | Import all tools from the MCP server |
| adopt server_name.{tool_a, tool_b}; | Import specific tools by name |
| adopt server_name.* as prefix_; | Import all tools with a name prefix |

This is especially useful when integrating with MCP servers that expose dozens of tools (such as database servers or cloud platform connectors). Instead of writing an extern skill declaration for each one, a single adopt statement brings them all into scope.
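The prefixed form helps when two servers expose tools with the same name. Assuming adopted tools are renamed by prepending the prefix (the github server and the gh_create_issue / gh_list_issues skill names below are illustrative):

```neam
mcp_server github {
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-github"]
}

// Every tool from the github server becomes available as gh_<tool_name>
adopt github.* as gh_;

agent RepoBot {
  provider: "openai"
  model: "gpt-4o"
  system: "You help with GitHub repository tasks."
  skills: [gh_create_issue, gh_list_issues]
}
```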


Structured Output with output_type #

Sometimes you need an agent to return data in a specific format -- not free-form text, but a structured map with known fields. The output_type field enforces this:

neam
agent SentimentAnalyzer {
  provider: "openai"
  model: "gpt-4o-mini"
  temperature: 0.1
  system: "Analyze the sentiment of the given text."
  output_type: {
    "sentiment": "string",
    "confidence": "number",
    "explanation": "string"
  }
}

{
  let result = SentimentAnalyzer.ask("I absolutely love this new programming language!");
  emit "Sentiment: " + result.sentiment;
  emit "Confidence: " + str(result.confidence);
  emit "Explanation: " + result.explanation;
}

When output_type is set, the LLM is constrained to return JSON matching the specified schema. The VM parses the JSON automatically and returns a Neam map instead of a raw string. This is essential for agents whose output feeds into programmatic logic rather than being displayed to a user.

When to Use Structured Output #

| Scenario | Use output_type? |
|---|---|
| Chatbot responding to users | No -- free-form text is fine |
| Classification agent | Yes -- return {category, confidence} |
| Data extraction pipeline | Yes -- return structured fields |
| Agent feeding data to another agent | Yes -- structured data is easier to process |
| Router deciding which agent to call | Yes -- return {route, reason} |
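The router case from the table might be declared like this sketch (the RouterAgent name and route values are illustrative):

```neam
agent RouterAgent {
  provider: "openai"
  model: "gpt-4o-mini"
  temperature: 0.0
  system: "Decide which specialist should handle the user's request:
           'math' for calculations, 'files' for file questions, 'web' for anything else."
  output_type: {
    "route": "string",
    "reason": "string"
  }
}

{
  let decision = RouterAgent.ask("What is in ./notes.txt?");
  // The result is a parsed Neam map, not a raw string
  emit "Route: " + decision.route + " (" + decision.reason + ")";
}
```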

Native Functions for Tool Development #

When writing tool implementations, you have access to Neam's built-in functions for I/O, HTTP, and data processing. Here are the most useful ones for tool development:

| Function | Description | Example |
|---|---|---|
| http_get(url) | HTTP GET request | http_get("https://api.example.com/data") |
| http_post(url, body) | HTTP POST request | http_post(url, json_stringify(data)) |
| json_parse(str) | Parse JSON string to map/list | json_parse('{"key": "value"}') |
| json_stringify(val) | Convert map/list to JSON string | json_stringify(my_map) |
| file_read_string(path) | Read file contents as string | file_read_string("./data.txt") |
| file_write_string(path, data) | Write string to file | file_write_string("./out.txt", result) |
| len(val) | Length of string, list, or map | len("hello") → 5 |
| str(val) | Convert any value to string | str(42) → "42" |
| num(val) | Convert string to number | num("42") → 42 |
| sleep(ms) | Pause execution | sleep(1000) → waits 1 second |

Example: A Tool Using HTTP and JSON #

neam
tool GitHubUser {
  description: "Look up a GitHub user's profile by username"
  params: [
    { name: "username", schema: { "type": "string" } }
  ]
  impl(username) {
    try {
      let url = "https://api.github.com/users/" + username;
      let raw = http_get(url);
      let data = json_parse(raw);
      return {
        "name": data.name,
        "bio": data.bio,
        "public_repos": data.public_repos,
        "followers": data.followers
      };
    } catch (err) {
      return "Error fetching user: " + str(err);
    }
  }
}

Tool Call Tracing #

When tracing is enabled (via neam.toml or a runner), every tool call is automatically logged with its inputs, outputs, and timing:

text
Tool call: Calculator
  Input: {operation: "multiply", a: 1847, b: 293}
  Output: 541171
  Duration: 0ms

Tool call: WebSearch
  Input: {query: "current weather in Tokyo"}
  Output: "Partly cloudy, 18°C"
  Duration: 324ms

This tracing is invaluable for debugging agent behavior -- you can see exactly which tools the LLM chose to call, what parameters it passed, and what results were returned. Combined with the LLM call traces from Chapter 11, you get a complete picture of every step in your agent's reasoning.


Complete Example: Research Assistant with Tools #

Here is a comprehensive example combining multiple tools, an agent with skills, and error handling:

neam
// A research assistant with search, calculation, and file reading capabilities

tool WebSearch {
  description: "Search the web for current information on any topic"
  params: [
    { name: "query", schema: { "type": "string" } }
  ]
  impl(query) {
    try {
      let result = http_get("https://api.search.com/?q=" + query);
      return result;
    } catch (err) {
      return "Search unavailable: " + str(err);
    }
  }
}

tool Calculator {
  description: "Perform precise mathematical calculations. Supports add, subtract, multiply, divide operations."
  params: [
    { name: "operation", schema: { "type": "string" } },
    { name: "a", schema: { "type": "number" } },
    { name: "b", schema: { "type": "number" } }
  ]
  impl(operation, a, b) {
    if (operation == "add") { return str(a + b); }
    if (operation == "subtract") { return str(a - b); }
    if (operation == "multiply") { return str(a * b); }
    if (operation == "divide") {
      if (b == 0) { return "Error: division by zero"; }
      return str(a / b);
    }
    return "Unknown operation: " + operation;
  }
}

tool FileReader {
  description: "Read the contents of a local file. Provide the file path."
  params: [
    { name: "path", schema: { "type": "string" } }
  ]
  impl(path) {
    try {
      let content = file_read_string(path);
      if (len(content) > 5000) {
        return content.substring(0, 5000) + "\n... [truncated]";
      }
      return content;
    } catch (err) {
      return "Error reading file: " + str(err);
    }
  }
}

agent ResearchBot {
  provider: "openai"
  model: "gpt-4o"
  temperature: 0.3
  system: "You are a thorough research assistant. You have access to web search,
           a calculator, and file reading tools.

           Guidelines:
           - Always use the Calculator for math -- never do mental arithmetic.
           - Use WebSearch for any factual questions about current events.
           - Use FileReader when the user asks about file contents.
           - Cite your sources when using search results.
           - If a tool fails, acknowledge the failure and try an alternative approach."
  skills: [WebSearch, Calculator, FileReader]
}

{
  emit "=== Research Assistant ===";
  emit "";

  // Test 1: Calculation
  let r1 = ResearchBot.ask("What is the compound interest on $10,000 at 5% for 3 years?");
  emit "Calculation: " + r1;
  emit "";

  // Test 2: File reading
  let r2 = ResearchBot.ask("Summarize the contents of ./README.md");
  emit "File summary: " + r2;
  emit "";

  // Test 3: General question (may use search or direct knowledge)
  let r3 = ResearchBot.ask("What are the three laws of thermodynamics?");
  emit "Answer: " + r3;

  emit "";
  emit "=== Demo Complete ===";
}

Tool Design Best Practices #

  1. Write clear descriptions. The LLM decides when to use a tool based solely on its description. "Search the web for current information on any topic" is better than "Search."

  2. Use specific parameter names. city is better than input. file_path is better than param1.

  3. Handle errors in impl. Tools should never crash. Always wrap external calls in try/catch and return helpful error messages.

  4. Limit tool output size. LLMs have context limits. If a tool might return very large outputs (like reading a file), truncate the result.

  5. One responsibility per tool. A tool should do one thing well. If you need search AND calculation, create two separate tools rather than one "do everything" tool.

  6. Test tools independently. Before connecting a tool to an agent, test its impl function directly to verify it works correctly.
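For the last practice, it helps to exercise a tool's logic before any LLM is involved. The chapter does not cover the exact mechanism for invoking an `impl` directly, so treat the `Calculator.impl(...)` calls below as a hypothetical sketch; the expected results follow from the `Calculator` definition shown earlier:

```neam
// Hypothetical: call the tool body directly, bypassing the agent.
// The direct-invocation syntax is an assumption for illustration.
{
  emit Calculator.impl("add", 2, 3);       // "5"
  emit Calculator.impl("divide", 10, 0);   // "Error: division by zero"
}
```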


Summary #

In this chapter, you learned:

- How to define tools with the `tool` keyword: a `description` the LLM uses to decide when to call the tool, a `params` list of JSON-schema-typed parameters, and an `impl` block containing the implementation.
- How to connect tools to agents through the `skills` list.
- Which native functions (HTTP, JSON, file I/O, and conversions) are most useful inside tool implementations.
- How to control tool execution with guards, guard chains, and budgets.
- How to integrate external tool servers through the Model Context Protocol (MCP).
- How tool call tracing records the inputs, outputs, and timing of every call.
- Best practices for tool design: clear descriptions, specific parameter names, error handling, bounded output size, and one responsibility per tool.

In the next chapter, we will take agent collaboration to the next level with multi-agent orchestration -- handoffs, runners, and coordination patterns.


Exercises #

Exercise 12.1: String Processing Tool Define a tool called TextAnalyzer with the following capabilities: given a string, return a map containing its length, word count, and the number of sentences (defined as substrings ending with ., ?, or !). Connect it to an agent and test it.

Exercise 12.2: Multi-Tool Agent Create three tools: ToUpperCase (converts text to uppercase), ReverseString (reverses a string), and WordCount (counts words in text). Connect all three to a single agent and ask it questions that require using different tools.

Exercise 12.3: API Integration Write a tool called GitHubRepos that uses http_get() to fetch the public repositories of a given GitHub username from the GitHub API (https://api.github.com/users/{username}/repos). Parse the JSON response and return a formatted list of repository names. Handle errors for invalid usernames.

Exercise 12.4: Tool with Validation Create a DivisionTool that divides two numbers. Add input validation in the impl block to check for division by zero and non-numeric inputs. Test it by asking the agent to divide by zero and observe the error handling.

Exercise 12.5: Chained Tools Design a two-tool system: FetchURL (fetches content from a URL) and Summarizer (a tool that calls a second agent to summarize text). Connect both tools to a primary agent and ask it: "Fetch the content of https://example.com and summarize it." Observe how the agent chains the two tools.

Exercise 12.6: Tool Description Experiment Create the same tool twice with different descriptions -- one vague ("Does stuff") and one specific ("Calculates the area of a circle given a radius in centimeters"). Connect each to a separate agent with the same system prompt. Ask both "What is the area of a circle with radius 5cm?" and compare which agent successfully uses the tool.

Exercise 12.7: External Skill with HTTP Binding Write an extern skill called github_profile with an HTTP binding that fetches a GitHub user's public profile from https://api.github.com/users/{username}. Use response_path to extract the user's name, or return the full response if you prefer. Set a timeout of 5000 milliseconds. Connect the skill to an agent and ask it: "Tell me about the GitHub user torvalds."

Exercise 12.8: Guard and Budget Composition Write a guard called DeleteBlocker that inspects skill inputs and blocks any call where the input contains the word "delete" (hint: use input.contains("delete")). Chain it into a guardchain. Then declare a budget with a limit of 10 API calls. Attach both the guard chain and the budget to an agent that has a delete_record skill and a read_record skill. Verify that read_record calls succeed but delete_record calls are blocked by the guard.
