Chapter 18: Agent-to-Agent Protocol #
"The value of an agent ecosystem increases with the square of the number of agents that can discover and communicate with each other."
In the previous chapters, every agent we have built runs within a single Neam program on a single machine. When one agent hands off to another, the handoff happens through in-process function calls managed by the VM. This works well for monolithic applications, but production systems often require agents to communicate across processes, machines, and organizations.
The Agent-to-Agent (A2A) protocol, originally proposed by Google, solves this problem. A2A defines a standard way for agents to:
- Discover each other through published capability cards.
- Communicate via a well-defined JSON-RPC 2.0 interface.
- Manage tasks through a lifecycle of submission, execution, and result retrieval.
- Stream results in real time via Server-Sent Events (SSE).
Neam implements A2A as a first-class feature. You declare an agent card directly in your
Neam program, and the neam-api server automatically exposes the discovery endpoint,
JSON-RPC dispatch, and SSE streaming. This chapter covers the A2A protocol from concept
through implementation.
18.1 What Is A2A? #
A2A is a protocol for inter-agent communication. Think of it as a REST API standard specifically designed for AI agents. Just as the OpenAPI specification lets HTTP services describe their endpoints, A2A lets agents describe their capabilities, accept tasks, and return results through a standardized interface.
+------------------------------------------------------------------+
| Client Agent Remote Agent Server |
| (or Application) |
| | |
| | 1. GET /.well-known/agent.json |
| |-----------------------------------------------> |
| | |
| | 2. Agent Card (capabilities, schemas, endpoint) |
| |<----------------------------------------------- |
| | |
| | 3. POST /a2a (JSON-RPC: tasks/send) |
| |-----------------------------------------------> |
| | |
| | 4. Task ID + initial status |
| |<----------------------------------------------- |
| | |
| | 5. POST /a2a (JSON-RPC: tasks/get) |
| |-----------------------------------------------> |
| | |
| | 6. Task result (status: completed) |
| |<----------------------------------------------- |
| | |
| | --- OR --- |
| | |
| | 5. GET /a2a/tasks/{id}/stream (SSE) |
| |-----------------------------------------------> |
| | |
| | 6. event: status data: {"status": "running"} |
| | 6. event: chunk data: {"text": "partial..."} |
| | 6. event: done data: {"status": "completed"} |
| |<------- (streaming) ---------------------------- |
+------------------------------------------------------------------+
Why A2A? #
| Without A2A | With A2A |
|---|---|
| Custom HTTP APIs for each agent | Standard protocol for all agents |
| No discoverability | .well-known/agent.json auto-discovery |
| Ad-hoc task management | Formal task lifecycle (send/get/cancel) |
| Polling for results | SSE streaming built in |
| Tight coupling between agents | Loose coupling through capability matching |
| Organization-internal only | Cross-organization agent collaboration |
18.2 Agent Cards #
An agent card is a JSON document that describes an agent's capabilities, input/output schemas, and connection details. It is the A2A equivalent of a business card -- it tells other agents what this agent can do and how to talk to it.
Declaring an Agent Card in Neam #
agent DataAnalyzer {
provider: "openai"
model: "gpt-4o"
system: "You are a specialized agent for data analysis and visualization.
Provide clear insights and recommendations."
card {
version: "1.0"
description: "Data analysis agent that processes datasets and produces insights."
capabilities: ["analysis", "visualization", "trend-detection", "statistics"]
input_schema: [
{ name: "data", type: "array", required: true },
{ name: "analysis_type", type: "string", required: false }
]
output_schema: [
{ name: "result", type: "object", required: true },
{ name: "confidence", type: "number", required: true }
]
}
}
Agent Card Fields #
| Field | Type | Required | Description |
|---|---|---|---|
| version | string | Yes | Card version (e.g., "1.0") |
| description | string | Yes | Human-readable description of the agent's purpose |
| capabilities | list of strings | Yes | Tags describing what the agent can do |
| input_schema | list of field descriptors | No | Expected input fields |
| output_schema | list of field descriptors | No | Fields in the output |
| endpoint_url | string | No | Override the default endpoint URL |
| authentication | string | No | Auth type: "bearer", "api_key", "none" |
Field Descriptors #
Each entry in input_schema and output_schema is a field descriptor:
| Property | Type | Description |
|---|---|---|
| name | string | Field name |
| type | string | Data type: "string", "number", "boolean", "array", "object" |
| required | bool | Whether the field is required |
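Because field descriptors are plain data, a client can check a payload against them before submitting a task. The following Python sketch is our own illustration (the helper name is hypothetical; the A2A server performs its own validation regardless):

```python
# Minimal client-side validator for A2A field descriptors.
TYPE_CHECKS = {
    "string": lambda v: isinstance(v, str),
    "number": lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
    "boolean": lambda v: isinstance(v, bool),
    "array": lambda v: isinstance(v, list),
    "object": lambda v: isinstance(v, dict),
}

def validate_input(schema, payload):
    """Return a list of problems; an empty list means the payload conforms."""
    problems = []
    for field in schema:
        name = field["name"]
        if name not in payload:
            if field.get("required", False):
                problems.append(f"missing required field: {name}")
            continue
        check = TYPE_CHECKS.get(field["type"])
        if check and not check(payload[name]):
            problems.append(f"field {name} is not of type {field['type']}")
    return problems

schema = [
    {"name": "data", "type": "array", "required": True},
    {"name": "analysis_type", "type": "string", "required": False},
]
print(validate_input(schema, {"data": [100, 150, 120, 200]}))  # []
print(validate_input(schema, {"analysis_type": 7}))
```

Running the validator before `tasks/send` turns a round-trip `-32602 Invalid params` error into an immediate local failure.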
18.3 The .well-known/agent.json Endpoint #
When you start a Neam program with A2A enabled, the server automatically publishes the
agent card at /.well-known/agent.json. This is a standard discovery endpoint -- any
client that knows the server's URL can fetch the card to learn about the agent's
capabilities.
Starting the A2A Server #
# Step 1: Compile the agent program
./neamc my_agent.neam -o my_agent.neamb
# Step 2: Start the API server with A2A enabled
./neam-api --port 9090 --agent-file my_agent.neamb --a2a
Discovering the Agent #
curl http://localhost:9090/.well-known/agent.json
Response:
{
"name": "DataAnalyzer",
"version": "1.0",
"description": "Data analysis agent that processes datasets and produces insights.",
"capabilities": ["analysis", "visualization", "trend-detection", "statistics"],
"input_schema": {
"data": { "type": "array", "required": true },
"analysis_type": { "type": "string", "required": false }
},
"output_schema": {
"result": { "type": "object", "required": true },
"confidence": { "type": "number", "required": true }
}
}
Any other agent, application, or service can read this endpoint to understand:
- What the agent does (description)
- What it can handle (capabilities)
- What input it expects (input_schema)
- What output it produces (output_schema)
18.4 Task Lifecycle #
A2A defines a formal task lifecycle. Tasks progress through a series of states:
Task States #
| State | Description |
|---|---|
| pending | Task has been received but not yet started |
| running | Agent is actively processing the task |
| completed | Task finished successfully, result available |
| cancelled | Task was cancelled by the client |
| failed | Task encountered an error |
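The table implies a small transition graph: pending can move forward or be cancelled, running can end in any terminal state, and terminal states never change. The encoding below is our own reading of the lifecycle, sketched in Python:

```python
# Legal A2A task state transitions as implied by the lifecycle table.
# The transition map is our interpretation, not quoted from the protocol text.
TRANSITIONS = {
    "pending":   {"running", "cancelled", "failed"},
    "running":   {"completed", "failed", "cancelled"},
    "completed": set(),  # terminal
    "cancelled": set(),  # terminal
    "failed":    set(),  # terminal
}

def can_transition(current, target):
    return target in TRANSITIONS.get(current, set())

assert can_transition("pending", "running")
assert not can_transition("completed", "running")  # terminal states are final
```

A client that tracks states this way can, for example, refuse to call tasks/cancel once a task is already in a terminal state.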
18.5 JSON-RPC 2.0 Protocol #
A2A communication uses JSON-RPC 2.0 over HTTP POST. All task operations go through
the /a2a endpoint.
Sending a Task #
curl -X POST http://localhost:9090/a2a \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tasks/send",
"params": {
"agent": "DataAnalyzer",
"input": {
"data": [100, 150, 120, 200],
"analysis_type": "trend"
}
},
"id": 1
}'
Response:
{
"jsonrpc": "2.0",
"result": {
"task_id": "task-abc-123",
"status": "pending",
"agent": "DataAnalyzer"
},
"id": 1
}
Getting Task Status #
curl -X POST http://localhost:9090/a2a \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tasks/get",
"params": {
"task_id": "task-abc-123"
},
"id": 2
}'
Response (completed):
{
"jsonrpc": "2.0",
"result": {
"task_id": "task-abc-123",
"status": "completed",
"agent": "DataAnalyzer",
"output": {
"result": {
"trend": "upward",
"growth_rate": 0.26,
"summary": "Revenue shows an upward trend with 26% overall growth."
},
"confidence": 0.87
}
},
"id": 2
}
Cancelling a Task #
curl -X POST http://localhost:9090/a2a \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tasks/cancel",
"params": {
"task_id": "task-abc-123"
},
"id": 3
}'
Response:
{
"jsonrpc": "2.0",
"result": {
"task_id": "task-abc-123",
"status": "cancelled"
},
"id": 3
}
JSON-RPC Methods #
| Method | Description | Required Params |
|---|---|---|
| tasks/send | Submit a new task | agent, input |
| tasks/get | Get task status and result | task_id |
| tasks/cancel | Cancel a running task | task_id |
Error Responses #
JSON-RPC errors follow the standard format:
{
"jsonrpc": "2.0",
"error": {
"code": -32600,
"message": "Invalid Request",
"data": "Agent 'UnknownAgent' not found"
},
"id": 1
}
| Code | Meaning |
|---|---|
| -32700 | Parse error (invalid JSON) |
| -32600 | Invalid request (missing required fields) |
| -32601 | Method not found |
| -32602 | Invalid params |
| -32000 | Agent not found |
| -32001 | Task not found |
| -32002 | Task already cancelled |
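When writing a client it is useful to separate the standard JSON-RPC codes from the A2A-specific ones, which live in the JSON-RPC "server error" range (-32000 to -32099). A small Python helper built from the table above (the function names are ours):

```python
# Error-code names taken from the table above.
ERROR_NAMES = {
    -32700: "Parse error",
    -32600: "Invalid Request",
    -32601: "Method not found",
    -32602: "Invalid params",
    -32000: "Agent not found",
    -32001: "Task not found",
    -32002: "Task already cancelled",
}

def make_error(request_id, code, data=None):
    """Build a standard JSON-RPC 2.0 error response object."""
    return {
        "jsonrpc": "2.0",
        "error": {"code": code, "message": ERROR_NAMES.get(code, "Server error"), "data": data},
        "id": request_id,
    }

def is_a2a_error(code):
    # JSON-RPC reserves -32000..-32099 for implementation-defined server errors;
    # A2A's agent/task errors fall in this range.
    return -32099 <= code <= -32000

err = make_error(1, -32000, "Agent 'UnknownAgent' not found")
```

A client can then branch on `is_a2a_error` to retry, re-discover, or surface the failure differently from a malformed request.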
18.6 SSE Streaming #
For long-running tasks, clients can stream results in real time using Server-Sent
Events (SSE). Instead of polling tasks/get, the client opens a persistent HTTP
connection and receives events as they occur.
Streaming a Task #
curl http://localhost:9090/a2a/tasks/task-abc-123/stream
Event stream:
event: status
data: {"task_id": "task-abc-123", "status": "running"}
event: chunk
data: {"text": "Analyzing the data series: [100, 150, 120, 200]..."}
event: chunk
data: {"text": "Identified an upward trend with 26% overall growth."}
event: done
data: {"task_id": "task-abc-123", "status": "completed", "output": {...}}
SSE Event Types #
| Event | Description |
|---|---|
| status | Task status change (pending -> running, etc.) |
| chunk | Partial result (streaming text) |
| tool_call | Agent invoked a tool during processing |
| error | An error occurred during processing |
| done | Task completed, final result included |
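SSE is a line-oriented text format: each event is an `event:` line, one or more `data:` lines, and a blank-line terminator. A minimal Python parser for streams like the one shown above (a sketch; a production client should use an SSE library that also handles reconnection and the `id:`/`retry:` fields):

```python
import json

def parse_sse(text):
    """Parse a complete SSE payload into a list of (event_type, data) pairs."""
    events = []
    event_type, data_lines = "message", []
    for line in text.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # Blank line ends the event; multi-line data joins with newlines.
            events.append((event_type, json.loads("\n".join(data_lines))))
            event_type, data_lines = "message", []
    if data_lines:  # stream ended without a trailing blank line
        events.append((event_type, json.loads("\n".join(data_lines))))
    return events

stream = (
    "event: status\n"
    'data: {"task_id": "task-abc-123", "status": "running"}\n'
    "\n"
    "event: chunk\n"
    'data: {"text": "partial..."}\n'
    "\n"
)
for name, payload in parse_sse(stream):
    print(name, payload)
```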
Consuming SSE in Code #
While the examples above use curl, a production client consumes SSE events programmatically. The Neam example below approximates the pattern by polling tasks/get in a loop; a true SSE client would instead hold the /a2a/tasks/{id}/stream connection open and react to each event as it arrives:
agent ClientAgent {
provider: "openai"
model: "gpt-4o-mini"
system: "You coordinate tasks with remote agents."
}
{
// In a real scenario, you would use http_request() to call the A2A endpoint
let task_request = {
"jsonrpc": "2.0",
"method": "tasks/send",
"params": {
"agent": "DataAnalyzer",
"input": {
"data": [100, 150, 120, 200],
"analysis_type": "trend"
}
},
"id": 1
};
let response = http_request(
"POST",
"http://remote-server:9090/a2a",
json_stringify(task_request),
{"Content-Type": "application/json"}
);
let result = json_parse(response["body"]);
let task_id = result["result"]["task_id"];
emit "Task submitted: " + task_id;
// Poll for completion (max 30 attempts)
for attempt in range(30) {
time_sleep(1000);
let status_request = {
"jsonrpc": "2.0",
"method": "tasks/get",
"params": { "task_id": task_id },
"id": 2
};
let status_response = http_request(
"POST",
"http://remote-server:9090/a2a",
json_stringify(status_request),
{"Content-Type": "application/json"}
);
let status = json_parse(status_response["body"]);
emit "Status: " + str(status["result"]["status"]);
if (status["result"]["status"] == "completed") {
emit "Result: " + str(status["result"]["output"]);
break;
}
}
}
18.7 A2A Endpoints Reference #
The neam-api server with the --a2a flag exposes three endpoints:
| Endpoint | Method | Description |
|---|---|---|
| /.well-known/agent.json | GET | Agent card discovery |
| /a2a | POST | JSON-RPC 2.0 dispatch (tasks/send, tasks/get, tasks/cancel) |
| /a2a/tasks/{id}/stream | GET | SSE task result streaming |
In addition, the standard neam-api endpoints remain available:
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/health | GET | Health check |
| /api/v1/agents | GET | List available agents |
| /api/v1/agent/ask | POST | Direct agent query (non-A2A) |
Server Command Line #
./neam-api --host 0.0.0.0 --port 9090 --agent-file my_agents.neamb --a2a
| Option | Default | Description |
|---|---|---|
| --host | 0.0.0.0 | Host to bind to |
| --port | 8080 | Port to listen on |
| --agent-file | -- | Compiled agent bundle (.neamb) |
| --a2a | disabled | Enable A2A protocol endpoints |
18.8 Inter-Organization Agent Communication #
A2A is designed for agents that span organizational boundaries. Consider a scenario where Company A has a data analysis agent and Company B has a report generation agent. With A2A, they can collaborate without sharing code, credentials, or infrastructure.
Discovery-Based Routing #
// A client that discovers and routes to the best available agent
agent Coordinator {
provider: "openai"
model: "gpt-4o-mini"
system: "You coordinate tasks by discovering and delegating to specialized agents."
}
fun discover_agent(base_url) {
let response = http_get(base_url + "/.well-known/agent.json");
return json_parse(response["body"]);
}
fun has_capability(card, capability) {
let caps = card["capabilities"];
for cap in caps {
if (cap == capability) {
return true;
}
}
return false;
}
fun send_task(base_url, agent_name, input_data) {
let request = {
"jsonrpc": "2.0",
"method": "tasks/send",
"params": {
"agent": agent_name,
"input": input_data
},
"id": 1
};
let response = http_request(
"POST",
base_url + "/a2a",
json_stringify(request),
{"Content-Type": "application/json"}
);
return json_parse(response["body"]);
}
{
// Discover agents from multiple organizations
let agent_registry = [
"https://analytics.company-a.com",
"https://reports.company-b.com",
"https://ml.company-c.com"
];
for (i, url) in enumerate(agent_registry) {
let card = discover_agent(url);
emit "Discovered: " + card["name"];
emit " Capabilities: " + str(card["capabilities"]);
// Route based on capability
if (has_capability(card, "analysis")) {
emit " -> Sending analysis task";
let result = send_task(
url,
card["name"],
{ "data": [100, 150, 120, 200] }
);
emit " -> Task ID: " + result["result"]["task_id"];
}
}
}
18.9 Building an A2A Agent: Complete Walkthrough #
Let us build a complete A2A agent from scratch.
Step 1: Define the Agent with a Card #
agent RefundProcessor {
provider: "openai"
model: "gpt-4o-mini"
system: "You process refund requests. Verify order details, apply company
refund policies, and provide a clear decision with reasoning.
Always include the refund amount and processing timeline."
card {
version: "1.0"
description: "Processes refund requests according to company policy."
capabilities: ["refund-processing", "order-verification", "policy-enforcement"]
input_schema: [
{ name: "order_id", type: "string", required: true },
{ name: "reason", type: "string", required: true },
{ name: "amount", type: "number", required: true }
]
output_schema: [
{ name: "decision", type: "string", required: true },
{ name: "refund_amount", type: "number", required: true },
{ name: "timeline", type: "string", required: true },
{ name: "reasoning", type: "string", required: true }
]
}
}
Step 2: Compile and Start #
# Compile
./neamc refund_agent.neam -o refund_agent.neamb
# Start with A2A enabled
./neam-api --port 9090 --agent-file refund_agent.neamb --a2a
Step 3: Discover #
curl http://localhost:9090/.well-known/agent.json | python3 -m json.tool
Step 4: Submit a Task #
curl -X POST http://localhost:9090/a2a \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tasks/send",
"params": {
"agent": "RefundProcessor",
"input": {
"order_id": "ORD-2024-5678",
"reason": "Product arrived damaged",
"amount": 49.99
}
},
"id": 1
}'
Step 5: Retrieve the Result #
curl -X POST http://localhost:9090/a2a \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "tasks/get",
"params": { "task_id": "task-001" },
"id": 2
}'
18.10 Multiple Agents on One Server #
A single neam-api instance can host multiple agents, each with its own card. The
agent card for the server includes all agents, and tasks/send routes to the correct
one based on the agent parameter.
agent RefundAgent {
provider: "openai"
model: "gpt-4o-mini"
system: "Process refund requests."
card {
version: "1.0"
description: "Refund processing agent"
capabilities: ["refund-processing"]
input_schema: [
{ name: "order_id", type: "string", required: true },
{ name: "reason", type: "string", required: true }
]
output_schema: [
{ name: "decision", type: "string", required: true }
]
}
}
agent AnalyticsAgent {
provider: "openai"
model: "gpt-4o"
system: "Analyze data and produce insights."
card {
version: "1.0"
description: "Data analytics agent"
capabilities: ["analysis", "statistics", "visualization"]
input_schema: [
{ name: "data", type: "array", required: true }
]
output_schema: [
{ name: "insights", type: "object", required: true }
]
}
}
agent CodeReviewAgent {
provider: "openai"
model: "gpt-4o"
system: "Review code for issues, security vulnerabilities, and suggest improvements."
card {
version: "1.0"
description: "Code review agent"
capabilities: ["code-review", "security-analysis", "best-practices"]
input_schema: [
{ name: "code", type: "string", required: true },
{ name: "language", type: "string", required: false }
]
output_schema: [
{ name: "issues", type: "array", required: true },
{ name: "suggestions", type: "array", required: true }
]
}
}
./neamc multi_agent_a2a.neam -o multi_agent_a2a.neamb
./neam-api --port 9090 --agent-file multi_agent_a2a.neamb --a2a
Clients can now discover all three agents and route tasks to each one.
18.11 Security Considerations #
A2A agents are HTTP services exposed to the network. Security must be considered at multiple levels.
Authentication #
The agent card's authentication field declares the expected auth mechanism:
agent SecureAgent {
provider: "openai"
model: "gpt-4o"
system: "You are a secure agent."
card {
version: "1.0"
description: "Secure agent with bearer token auth"
capabilities: ["analysis"]
authentication: "bearer"
}
}
| Auth Type | Description |
|---|---|
"none" |
No authentication required (development only) |
"bearer" |
Requires Authorization: Bearer <token> header |
"api_key" |
Requires X-API-Key header |
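From the client side, each auth type maps to one HTTP header. A small Python helper (the function name is ours) that builds the matching header dict from the card's declared auth type:

```python
def auth_headers(auth_type, secret=None):
    """Build the HTTP headers implied by an agent card's authentication field."""
    if auth_type == "none":
        return {}
    if auth_type == "bearer":
        return {"Authorization": f"Bearer {secret}"}
    if auth_type == "api_key":
        return {"X-API-Key": secret}
    raise ValueError(f"unsupported auth type: {auth_type}")

# Merge into the usual request headers before POSTing to /a2a:
headers = {"Content-Type": "application/json", **auth_headers("bearer", "s3cret")}
```

Reading the auth type from the discovered card, rather than hard-coding it, keeps the client working when a server tightens its requirements.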
Network Security #
- Use HTTPS in production. Never expose A2A endpoints over plain HTTP on a public network.
- Firewall rules. Restrict access to A2A ports to known client IPs or VPN ranges.
- Rate limiting. Implement rate limiting to prevent abuse or runaway client agents.
Input Validation #
Always validate incoming task inputs against the declared input_schema. The A2A server
validates required fields automatically, but you should also validate data types and
ranges in your agent's system prompt or through guardrails.
guard InputValidator {
description: "Validates A2A task input"
on_observation(text) {
if (len(text) > 10000) {
return false; // Reject excessively long inputs
}
return true;
}
}
Budget Controls #
For autonomous A2A agents, budget limits prevent runaway cost if a client sends excessive tasks:
agent BudgetedAgent {
provider: "openai"
model: "gpt-4o-mini"
system: "Process requests within budget."
budget: {
max_daily_calls: 500
max_daily_cost: 50.0
max_daily_tokens: 500000
}
card {
version: "1.0"
description: "Budget-controlled agent"
capabilities: ["general"]
}
}
18.12 A2A with Capability-Based Routing #
A common pattern is a coordinator agent that discovers remote agents via their
.well-known/agent.json endpoints and routes tasks based on advertised
capabilities. This example uses the std::agents::a2a::client module to
communicate over the A2A protocol:
import std::agents::a2a::client;
// Remote A2A server URLs (each runs neam-api --a2a)
let servers = [
"https://data-team.internal:9090",
"https://support-team.internal:9091",
"https://eng-team.internal:9092"
];
// Step 1: Discover all remote agents and index by capability
let registry = {};
for (server in servers) {
let card = client::discover(server);
emit "Discovered: " + card["name"] + " at " + server;
for (cap in card["capabilities"]) {
registry[cap] = {
"name": card["name"],
"url": server
};
}
}
// Step 2: Route tasks to the agent with the matching capability
fun route_to_capable_agent(capability, task_input) {
emit " Looking for agent with capability: " + capability;
if (has_key(registry, capability)) {
let entry = registry[capability];
emit " Found: " + entry["name"] + " at " + entry["url"];
let task_id = client::send_task(entry["url"], entry["name"], task_input);
let result = client::await_result(task_id, 30000);
return result;
}
emit " No agent found for capability: " + capability;
return nil;
}
// Step 3: Dispatch tasks over A2A
{
emit "=== A2A Capability-Based Routing ===";
emit "";
emit "--- Task 1: Data Analysis ---";
let r1 = route_to_capable_agent("data-analysis", {
"query": "Analyze: Q1=$100k, Q2=$150k, Q3=$120k, Q4=$200k. What patterns?"
});
emit " Result: " + str(r1);
emit "";
emit "--- Task 2: Refund ---";
let r2 = route_to_capable_agent("refund-processing", {
"query": "Refund for order #12345, $49.99, item arrived damaged"
});
emit " Result: " + str(r2);
emit "";
emit "--- Task 3: Code Review ---";
let r3 = route_to_capable_agent("code-review", {
"query": "Review: function add(a, b) { return a + b; }"
});
emit " Result: " + str(r3);
emit "";
emit "=== Routing Complete ===";
}
This example demonstrates the full A2A flow: discovery via agent cards, capability
indexing, and task dispatch over JSON-RPC. Each remote server is a separate
neam-api --a2a process that could be running on a different machine, in a
different cloud, or even maintained by a different team.
18.13 Task Pipelines #
Tasks can be chained into pipelines where the output of one task becomes the input of the next:
agent Extractor {
provider: "openai"
model: "gpt-4o"
system: "Extract all dates and monetary values from text."
}
agent Analyzer {
provider: "openai"
model: "gpt-4o"
system: "Analyze extracted data and provide a summary."
}
{
emit "=== A2A Task Pipeline ===";
emit "";
// Task 1: Extract
emit "Step 1: Extracting key information...";
let extracted = Extractor.ask(
"Contract signed March 15, 2024 for $10,000 with first payment of $2,500 due April 1, 2024"
);
emit " Extracted: " + extracted;
emit "";
// Task 2: Analyze (uses output of Task 1)
emit "Step 2: Analyzing extracted data...";
let analysis = Analyzer.ask("Based on this: " + extracted + " - What is the payment schedule?");
emit " Analysis: " + analysis;
emit "";
emit "=== Pipeline Complete ===";
}
In a distributed A2A scenario, each step would be a tasks/send call to a different
remote agent, with the result of each task fed as input to the next.
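In that distributed form, each hop is just another tasks/send payload whose input embeds the previous hop's output. A Python sketch of the payload plumbing (the intermediate result is a stand-in value; a real client would POST each payload to the remote server's /a2a endpoint and await completion between hops):

```python
import itertools

_ids = itertools.count(1)  # monotonically increasing JSON-RPC request ids

def send_payload(agent, input_data):
    """Build the tasks/send JSON-RPC payload for one pipeline hop."""
    return {
        "jsonrpc": "2.0",
        "method": "tasks/send",
        "params": {"agent": agent, "input": input_data},
        "id": next(_ids),
    }

# Hop 1: extraction task for the remote Extractor agent.
hop1 = send_payload("Extractor", {"text": "Contract signed March 15, 2024 ..."})

# Hop 2: the (hypothetical) completed output of hop 1 becomes hop 2's input.
extracted = {"dates": ["2024-03-15", "2024-04-01"], "amounts": [10000, 2500]}
hop2 = send_payload("Analyzer", {"extracted": extracted})
```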
18.14 A2A Standard Library Modules #
The Neam standard library (std.agents.a2a) provides programmatic building blocks for
working with agent cards, tasks, and the JSON-RPC protocol directly in your Neam code.
Agent Card Builder (std.agents.a2a.card) #
Build and validate agent cards programmatically instead of using the declarative card
block:
import std::agents::a2a::card;
let my_card = card::create_card("DataProcessor", "1.0", "Processes datasets.");
my_card = card::with_capabilities(my_card, ["analysis", "statistics", "visualization"]);
my_card = card::with_input_schema(my_card, [
{ "name": "data", "type": "array", "required": true },
{ "name": "format", "type": "string", "required": false }
]);
my_card = card::with_output_schema(my_card, [
{ "name": "result", "type": "object", "required": true }
]);
my_card = card::with_authentication(my_card, "bearer", {});
// Validate and export
let is_valid = card::validate_card(my_card);
let card_json = card::to_json(my_card);
// Discovery URL helper
let discovery_url = card::get_discovery_url("https://api.example.com");
// Returns: "https://api.example.com/.well-known/agent.json"
This is useful when you need to construct agent cards dynamically -- for example, when an agent's capabilities depend on which tools or knowledge bases are loaded at startup.
Task Helpers (std.agents.a2a.task) #
The task module provides status constants and helper functions for managing the task lifecycle programmatically:
import std::agents::a2a::task;
// Create and submit a task
let t = task::create_task("DataProcessor", { "data": [1, 2, 3] });
t = task::submit(t);
emit "Task ID: " + t["id"];
emit "Status: " + task::get_status(t); // "pending"
// Complete or fail
t = task::complete(t, { "result": { "mean": 2.0 } });
// or: t = task::fail(t, "Input validation error");
// Convert to JSON-RPC response
let rpc_response = task::to_jsonrpc_response(t);
The module also provides an await_completion helper for polling:
// Wait up to 30 seconds, polling every 1 second
let result = task::await_completion(task_id, 30000, 1000);
JSON-RPC Utilities (std.agents.a2a.jsonrpc) #
Build and parse JSON-RPC 2.0 messages:
import std::agents::a2a::jsonrpc;
let request = jsonrpc::create_request("tasks/send", {
"agent": "DataProcessor",
"input": { "data": [1, 2, 3] }
}, 1);
let response = jsonrpc::create_response(1, { "task_id": "task-001" });
let error = jsonrpc::create_error(1, -32600, "Invalid Request", nil);
A2A Client (std.agents.a2a.client) #
A high-level client for interacting with remote A2A servers:
import std::agents::a2a::client;
// Discover a remote agent
let remote_card = client::discover("https://api.partner.com");
emit "Found: " + remote_card["name"];
// Submit a task
let task_id = client::send_task("https://api.partner.com", "DataProcessor", {
"data": [100, 200, 300]
});
// Wait for result
let result = client::await_result(task_id, 30000);
emit "Result: " + str(result);
18.15 OAuth2 and Advanced Authentication #
Production A2A deployments often require more sophisticated authentication than bearer tokens. The A2A protocol supports OAuth2 for machine-to-machine authentication between agent services.
agent SecureAnalyzer {
provider: "openai"
model: "gpt-4o"
system: "Secure data analysis agent."
card {
version: "1.0"
description: "Secure analysis agent with OAuth2"
capabilities: ["analysis"]
authentication: "oauth2"
}
}
| Auth Type | Header | Use Case |
|---|---|---|
"none" |
-- | Development and internal testing |
"bearer" |
Authorization: Bearer <token> |
Simple token-based auth |
"api_key" |
X-API-Key: <key> |
Third-party integrations |
"oauth2" |
Authorization: Bearer <jwt> |
Machine-to-machine, cross-organization |
For OAuth2, the client must obtain an access token from the authorization server before calling the A2A endpoint. The token is included as a Bearer token in the Authorization header.
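The typical machine-to-machine flow is the OAuth2 client-credentials grant: the client POSTs its ID and secret to the authorization server's token endpoint, then presents the returned access token as a Bearer header. The Python sketch below builds the token request body (field names follow RFC 6749's client_credentials grant; the actual HTTP call is left as a comment because the token endpoint URL is deployment-specific):

```python
from urllib.parse import urlencode

def client_credentials_form(client_id, client_secret, scope=None):
    """Build the form body for an OAuth2 client_credentials token request."""
    fields = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        fields["scope"] = scope  # optional; scope names are deployment-defined
    return urlencode(fields)

body = client_credentials_form("agent-client", "s3cret", scope="a2a.tasks")
# A real client would POST `body` to the authorization server's token endpoint
# with Content-Type: application/x-www-form-urlencoded, read access_token from
# the JSON response, and then call /a2a with:
#   Authorization: Bearer <access_token>
```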
18.16 Interoperability: Calling A2A from Other Languages #
A2A agents are standard HTTP services, which means any language or framework can interact with them. The Neam project provides helper libraries for common languages.
Python #
import requests
# Discover
card = requests.get("http://localhost:9090/.well-known/agent.json").json()
print(f"Agent: {card['name']}, Capabilities: {card['capabilities']}")
# Submit task
response = requests.post("http://localhost:9090/a2a", json={
"jsonrpc": "2.0",
"method": "tasks/send",
"params": {
"agent": card["name"],
"input": {"data": [100, 150, 120, 200]}
},
"id": 1
})
task_id = response.json()["result"]["task_id"]
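The script above only submits the task; to retrieve the result, poll tasks/get until the status is terminal. A continuation of the same Python example (the timeout and interval values are arbitrary choices, not protocol requirements):

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def poll_task(base_url, task_id, timeout_s=30, interval_s=1.0):
    """Poll tasks/get until the task reaches a terminal state or time runs out."""
    import requests  # third-party; same dependency as the example above
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        r = requests.post(f"{base_url}/a2a", json={
            "jsonrpc": "2.0",
            "method": "tasks/get",
            "params": {"task_id": task_id},
            "id": 2,
        }).json()
        status = r["result"]["status"]
        if status in TERMINAL:
            return r["result"]
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not finish in {timeout_s}s")

# result = poll_task("http://localhost:9090", task_id)
# print(result.get("output"))
```

For long-running tasks, the SSE stream endpoint avoids this polling entirely.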
JavaScript / TypeScript #
// Discover
const card = await fetch("http://localhost:9090/.well-known/agent.json")
.then(r => r.json());
// Submit task
const response = await fetch("http://localhost:9090/a2a", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
jsonrpc: "2.0",
method: "tasks/send",
params: { agent: card.name, input: { data: [100, 150, 120, 200] } },
id: 1
})
}).then(r => r.json());
// Stream results via SSE
const events = new EventSource(
`http://localhost:9090/a2a/tasks/${response.result.task_id}/stream`
);
events.addEventListener("done", (e) => console.log(JSON.parse(e.data)));
C API #
The Neam C API allows embedding the Neam runtime directly in C/C++ applications:
#include "neam.h"
NeamRuntime* runtime = neam_runtime_new();
neam_load_bytecode(runtime, "my_agents.neamb");
const char* result = neam_agent_ask(runtime, "DataAnalyzer", "Analyze [1,2,3]");
printf("Result: %s\n", result);
neam_runtime_free(runtime);
This is particularly useful for embedding Neam agents within existing C++ services, game engines, or edge devices where a full HTTP server is impractical.
Because A2A uses standard JSON-RPC 2.0 over HTTP, any language with an HTTP client can interact with a Neam A2A server. The protocol is intentionally framework-agnostic.
Summary #
In this chapter you learned:
- What A2A is: A protocol for agent discovery, communication, and task management across processes and organizations.
- Agent cards: How to declare capability descriptions, input/output schemas, and authentication requirements using the card block.
- Discovery: The .well-known/agent.json endpoint for automatic agent discovery.
- Task lifecycle: The formal progression from pending through running to completed, cancelled, or failed.
- JSON-RPC 2.0: How to submit tasks (tasks/send), check status (tasks/get), and cancel tasks (tasks/cancel) via the /a2a endpoint.
- SSE streaming: How to stream task results in real time.
- Security: Authentication, network security, input validation, and budget controls.
- Capability-based routing: Discovering agents and routing tasks based on declared capabilities.
- Task pipelines: Chaining tasks where each output feeds the next input.
- A2A standard library modules: Programmatic card building, task helpers with await_completion, JSON-RPC utilities, and a high-level A2A client.
- OAuth2 authentication: Machine-to-machine authentication for cross-organization agent communication.
- Interoperability: Calling A2A agents from Python, JavaScript/TypeScript, and C, plus embedding the Neam runtime via the C API.
Exercises #
Exercise 18.1: Agent Card Design #
Design agent cards for three agents in a hypothetical e-commerce system:
1. An inventory lookup agent
2. A pricing agent
3. A recommendation agent
For each agent, define appropriate capabilities, input_schema, and output_schema. Write the complete Neam declarations.
Exercise 18.2: A2A Server Deployment #
Compile the refund agent from Section 18.9 and start it as an A2A server. Use curl
to:
1. Discover the agent card.
2. Submit a refund task.
3. Poll for the task result.
Document the full request/response cycle.
Exercise 18.3: Multi-Agent A2A Server #
Build a Neam program with at least three agents, each with distinct capabilities and
agent cards. Start the A2A server and:
1. Discover all agents from the /.well-known/agent.json endpoint.
2. Submit a task to each agent.
3. Verify that each task routes to the correct agent.
Exercise 18.4: Task Pipeline #
Implement a three-step task pipeline where:
1. Agent A extracts structured data from unstructured text.
2. Agent B analyzes the extracted data.
3. Agent C generates a human-readable report from the analysis.
Each agent should have a card declaring its capabilities. Demonstrate the pipeline with a concrete input (e.g., a paragraph from a news article).
Exercise 18.5: Capability-Based Router #
Build a coordinator agent that:
1. Maintains a registry of remote agent URLs.
2. Discovers each agent's capabilities via /.well-known/agent.json.
3. Accepts a task description and capability requirement.
4. Routes the task to the most appropriate agent.
5. Returns the result.
Test with at least three different tasks that route to different agents.
Exercise 18.6: Security Hardening #
Take the A2A server from Exercise 18.2 and add:
1. Bearer token authentication (set in the agent card).
2. A guardrail that validates input length.
3. Budget limits to prevent abuse.
Document how each security measure works and demonstrate that unauthenticated or oversized requests are rejected.